**No.**

Here’s an example: A survey reports that 50% of respondents were satisfied with President X, with a margin of error of 3%. As you may have learned in Stat 101, this means that theoretically, if we repeated this survey an infinite number of times, then the range [47%, 53%] should encompass the “true value” of satisfaction with President X 95% of the time.
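Where that 3% comes from can be sketched in a few lines. Under the usual normal approximation, the 95% margin of error for a proportion is 1.96 × √(p(1−p)/n). The sample size of 1,067 below is an assumption on my part – it's roughly what a 3% margin at 50% implies, not a figure from the survey:

```python
import math

# 95% margin of error for a proportion, via the normal approximation:
# MoE = z * sqrt(p * (1 - p) / n), with z = 1.96 for 95% confidence.
def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    return z * math.sqrt(p * (1 - p) / n)

# At p = 0.5 with roughly 1,067 respondents, the margin is about 0.03,
# i.e. plus-or-minus 3 points around the 50% estimate: [47%, 53%].
moe = margin_of_error(0.5, 1067)
print(f"±{moe:.3%}")
```

Note that the margin shrinks only with the square root of n – quadrupling the sample merely halves the margin – which is why pollsters rarely bother going far past a thousand respondents.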

But what if, for example, most of President X’s supporters are young people, but the survey had far more old people in it? In other words, what if the survey was not representative of the population?

Then President X’s true support might be much higher, say 70% for example, in which case the margin of error tells you jack squat.

The difference between a survey estimate and the “true” value for the whole population of interest is called *bias*, and the margin of error tells you nothing about it. 70% isn’t anywhere near [47%, 53%].

A truly random sample should have *no bias*, but truly random samples of something as diffuse as the population of the Philippines are horrendously difficult, perhaps impossible, to get. So even the numbers that SWS and Pulse Asia put out, despite their best efforts, *may* be biased because of unknown factors.
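To make the 50%-versus-70% gap concrete, here is a toy simulation. Every number in it is hypothetical – a 60/40 young-old electorate, 90% support among the young, 40% among the old – chosen only so that true support works out to 70% while an old-heavy sample lands near the reported 50%:

```python
import random

random.seed(0)

# Hypothetical electorate: 60% young, 40% old.
# Young voters support President X at 90%, old voters at 40%.
# True overall support: 0.6 * 0.9 + 0.4 * 0.4 = 0.70.
def draw(young_share: float, n: int) -> float:
    """Sample n respondents, `young_share` of whom are young."""
    support = 0
    for _ in range(n):
        young = random.random() < young_share
        p_support = 0.9 if young else 0.4
        support += random.random() < p_support
    return support / n

print(draw(0.60, 1200))  # representative sample: lands near 0.70
print(draw(0.20, 1200))  # old-heavy sample: lands near 0.50
```

The old-heavy sample will report roughly 50% survey after survey, with a perfectly respectable-looking margin of error each time – the margin of error measures sampling noise, not the systematic gap between 0.50 and 0.70.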

The problem is that bias is difficult to measure. You’d have to know what the “true value” actually was, and you aren’t going to get that without a census – a survey of *literally everybody* in the population. Then you can use census numbers as benchmarks. For now, we can only trust that survey firms’ methodologies cover all bases.

——

The latest survey to come out in the news is a *non-probability online survey* of 1,200 registered voters aged 19-35, conducted by lobbying and campaign management firm Publicus Asia. I have previously made fun of Publicus in undiplomatic language, but in this case, while I would take the numbers with a grain of salt, there isn’t anything that strikes me as particularly deceptive about this latest effort. The term *non-probability survey* means that, rather than attempting a random sample (which would be considerably more expensive than what SWS and Pulse Asia usually do, considering that all 1,200 respondents have to be within a certain age group AND have to be registered to vote), they just put a poll somewhere online and let people register for the survey of their own volition. Another kind of non-probability survey would be those surveys you do in college where you stand somewhere with pleading Bambi eyes and try to get as many people as possible to fill out your paper form so you can meet your Psych 101 deadline.

Let’s assume that everyone who signed up for Publicus’s survey was indeed aged 19-35 and a registered voter (and a Filipino), which isn’t necessarily true because there isn’t any real way to check. The main issue with a non-probability sample is whether or not the resulting 1,200 people are fully representative of all Filipinos who are aged 19-35 and registered to vote. For example, do they have the same socio-economic breakdown? Is the ratio of males to females similar? Are they geographically distributed across the country? Do they have the same levels of education? And so on.

I bring this up because the term “margin of error” describes what would happen with repeated random samples all the way to infinity. Notice how Publicus Asia doesn’t report any margin of error? That’s as it should be – the margin of sampling error technically doesn’t apply to non-probability surveys. Pretending to have one would be deceptive, so it’s good that they don’t report any. There is also, of course, uncertainty over what the numbers would be if they did the same exact survey again, but the extent of that uncertainty is unknown.

If the non-probability sample has too many of a particular type of person compared to the rest of the Philippines, then its estimates may be biased. For example, Publicus’s survey was administered online, so the sample could overrepresent people with Internet access, which may also be correlated with socioeconomic status. This is not political bias – it’s just a statistical term.

Theoretically, a random sample can grab an adult Filipino from anywhere in the country, and so a large enough random sample will be representative on all these points, and will be *unbiased*. We don’t have that same assurance with a non-probability sample, where we don’t really have a clear sense of what caused someone to opt into the survey. Were they feeling bored? Did they visit a particular website that hosted the banner ads recruiting people into the survey? Why’d they click? Why did they *finish the entire thing?*

In practice, non-probability samples are not useless. Plenty of market researchers use non-probability samples mostly because they’re easy to do, but also because they aren’t super interested in population inference – they just want anyone who could potentially buy what they’re hawking. For those who do care about population inference, or making a sample look like the population, techniques such as applying nonresponse adjustments, calibration, and poststratification can produce accurate estimates out of non-probability samples. These same techniques are, in fact, also applied to “probability-based” surveys, because most surveys aren’t really random, even the ones done as rigorously as possible. SWS and Pulse Asia, for example, do a basic adjustment where after sampling 300 people from each of NCR, the rest of Luzon, Visayas and Mindanao, they adjust their numbers to match the actual percentages of people who live in those places – according to the census.
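That region adjustment can be sketched in a few lines. The satisfaction rates and census shares below are illustrative placeholders of mine, not actual SWS, Pulse Asia, or PSA figures:

```python
# Each area contributes 300 respondents, but the areas hold very
# different shares of the real population, so each area's estimate
# gets reweighted to its census share before averaging.
sample = {          # hypothetical share satisfied per area (300 each)
    "NCR":           0.40,
    "Rest of Luzon": 0.55,
    "Visayas":       0.50,
    "Mindanao":      0.60,
}
census_share = {    # hypothetical population shares, summing to 1
    "NCR": 0.13, "Rest of Luzon": 0.44, "Visayas": 0.20, "Mindanao": 0.23,
}

# Unweighted estimate implicitly treats each area as a quarter
# of the population, which the census says is wrong:
unweighted = sum(sample.values()) / len(sample)

# Weighted estimate scales each area's result by its census share:
weighted = sum(census_share[a] * p for a, p in sample.items())

print(f"unweighted: {unweighted}, weighted: {weighted}")
```

With these made-up numbers the unweighted average is 51.25% while the census-weighted estimate is 53.2% – the gap is exactly the kind of bias the adjustment exists to remove, and it only works if the census shares themselves are trustworthy.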

Many of these techniques require accurate census data. Really, the best way to advance survey research in the Philippines would be to start by beefing up the government’s capability to conduct the national census. Not only do you need that census to pull off fancy adjustments with your surveys, you also need that census to figure out whether what you’re doing is working in the first place.