Good Surveys, Bad Surveys: How to Tell the Difference

by singaporearmchaircritic


(Image by Andrew Low)

Are you perturbed by the findings of a flurry of opinion polls published recently in the mainstream media?

To cite a few: a Nielsen survey says about 70% of Singapore consumers are “unfazed” by the increases in food prices; a survey conducted by the Institute of Policy Studies finds a staggering 98.5% agreeing that National Service is necessary; 94% of commuters say they would give up their seat to those who need it more…

If you are puzzled why you consistently fall into the minority 30% or 1.5% that is out of sync with the rest of the population, fret not.

The problem may not lie with you but with the survey itself.

Here are a number of ways a survey or opinion poll may go wrong, intentionally or unintentionally:

1. The sample is non-representative

“Sample” in the jargon of statistics or quantitative research means “a group of units selected from a larger group” or a subset of a larger group.

This “larger group” may refer to an entire population of a country or the population residing in a region, but it may also refer to any community that is targeted by the surveyors.

For instance, some surveys aim to uncover the attitudes, habits, or behavior of a specific group of people, such as the amount of time youths spend on social media, or how PMETs feel about their salary.

In order to come up with reliable results that may be generalized to the bigger group, it is paramount that the right sample is drawn.

This is to say that you do not interview the elderly to find out how much time youngsters spend on Twitter, Facebook etc. (duh huh??), or ask employers how satisfied their employees are with their salary (of course my employees are immensely happy with their pay!).

Few surveys will make the fatal and easy-to-spot mistake of targeting the wrong group of people.

What we must be more wary of are spurious surveys that sample a non-representative sub-group but purport that their findings are representative of the larger group (i.e. you ask the residents of Bukit Timah about inflation and surmise that Singaporeans are generally unfazed by the rising cost of living).

This “selection bias” may be due to poor methodology, shortage of resources, or other more dubious reasons that you can think of (propaganda, for instance).

What do we mean by a “representative” sample?

A representative sample is often drawn randomly from the larger population.

“Random” in statistics jargon does not carry the usual sense of “made, done, happening, or chosen without method or conscious decision.”

In random sampling, everyone in the population has an equal chance of being selected. So in fact a lot of thought and method goes into the collection of a random sample.
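To make this concrete, here is a minimal sketch of simple random sampling in Python. The sampling frame is an invented one (residents numbered by ID) purely for illustration; real frames are built from registries or phone databases.

```python
import random

# Hypothetical sampling frame: every resident gets an ID.
# (Invented for illustration only.)
population = list(range(100_000))

random.seed(42)  # fixed seed so this illustration is reproducible

# random.sample draws without replacement; every ID has an
# equal chance of being selected -- that is what "random" means here.
sample = random.sample(population, k=400)

print(len(sample))       # 400
print(len(set(sample)))  # 400 (no resident is polled twice)
```

The point of the equal-chance property is that no subgroup (the young, the wealthy, the Internet-connected) is systematically over- or under-drawn.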

Let’s look at the aforementioned Nielsen survey. It says that nearly 70% of Singapore consumers are “unfazed” by rising food prices.

Who are the people it polled?

From the scant information in the Straits Times report, we know that responses were collected via the Internet.

This means only those with Internet access may take part in the survey.

If you think about the profile of Internet users – younger, more affluent, better educated, and so on – then it is easy to see what is very wrong with the survey.

To compound the problem, the group of people excluded from the poll, i.e. non-Internet users, coincides with the group of people who may find rising food prices hardest to cope with, i.e. the poor.

By leaving out this group of people, it’s no wonder the responses were largely positive.
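A quick simulation shows how large this kind of selection bias can be. All the numbers below are invented for illustration (they are not from the Nielsen poll): suppose 80% of the population is online and 60% of them are “unfazed,” while only 15% of the poorer, offline 20% are.

```python
import random

random.seed(0)

# Invented population: (group, unfazed?) pairs.
population = (
    [("online", random.random() < 0.60) for _ in range(80_000)]
    + [("offline", random.random() < 0.15) for _ in range(20_000)]
)

def pct_unfazed(people):
    """Percentage of people answering 'unfazed'."""
    return 100 * sum(unfazed for _, unfazed in people) / len(people)

everyone = pct_unfazed(population)
online_only = pct_unfazed([p for p in population if p[0] == "online"])

print(f"whole population: {everyone:.0f}% unfazed")      # roughly 51%
print(f"Internet-only poll: {online_only:.0f}% unfazed")  # roughly 60%
```

An Internet-only poll overstates the “unfazed” share by around nine percentage points under these assumed numbers, simply because the group most affected never gets asked.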

Yet our Straits Times headline reads “Food price increase? Most Singaporeans say they can absorb it.”

The report goes on to assert that “Consumers in Singapore are relatively unfazed by the thought of food prices heading north, with 69 per cent indicating there is enough flexibility in their household budget to absorb a rise in food prices” (emphases mine).

Whether it’s sheer unprofessionalism or a deliberate bid to mislead readers is anybody’s guess.

2. The poll asks leading questions

Sometimes an opinion poll may have rigorously followed the rules and steps to derive a good, representative sample.

But it may fall short in many other ways.

For one, the questionnaire may be badly designed. Questions may be structured in a way that leads respondents to answer in a particular manner, hence skewing the survey results.

Take, for example, the REACH poll on how Singaporeans felt about the SMRT strike last year. Respondents were asked the extent to which they agree with the following statements on a scale of 1 to 5 (1 means strongly disagree, 3 is neutral, 5 means strongly agree):

The Government has done the right thing by taking time to ascertain the facts before labelling the action as an illegal strike on the second day.

If the bus drivers from China are found to have breached Singapore’s law, they should be punished to the full extent of the law, as Singapore has zero tolerance for illegal strikes.

The bus drivers from China were wrong to have held a strike, but SMRT also bears some responsibility for the situation as it did not manage the grievances of the Chinese well.

The problem with these statements may not be obvious at first glance. But the way they are phrased and constructed puts respondents in a dilemma.

Take the first question. What if I agree that “The Government has done the right thing by taking time to ascertain the facts” but disagree with “labelling the action as an illegal strike”?

Similarly for the third question, what if I disagree that “The bus drivers from China were wrong to have held a strike,” but agree that “SMRT also bears some responsibility for the situation as it did not manage the grievances of the Chinese well”?

I suspect the results will be vastly different if the question is broken up into two parts: 1. Do you think the bus drivers from China were wrong to have held a strike? 2. Do you think SMRT also bears some responsibility for the situation as it did not manage the grievances of the Chinese well? (1 means strongly disagree, 3 is neutral, 5 means strongly agree).

Furthermore, the second question is obviously leading the respondent to “agree” with it by adding this phrase “as Singapore has zero tolerance for illegal strikes” at the end.

Intentionally or unintentionally, therefore, the REACH poll questions are poorly designed and hence the results may not accurately reflect public sentiments.

3. Respondents are not telling the truth

Under some circumstances, respondents may be lying or responding in a way that is socially or politically desirable.

In the U.S., for example, many White respondents will say they do not mind having an African American as their neighbor to avoid being viewed as a “racist” or “xenophobe.” However, that may not be what they really feel.

Similarly, if North Koreans are polled about how much they like and respect Kim Jong-un, you can be pretty sure that the results will be 100% positive.

The recent LTA survey to find out how “gracious” commuters are may suffer from the same problem.

In the poll, 98% of respondents said they queue up and give way to alighting passengers, and 94% of commuters said that they give up their seat to those who need it more.

To many of us commuters, the survey findings simply do not square with what we see in reality.

In other words, there is the likelihood that those polled in the LTA survey were simply conditioned to say the “right” thing. But this should not be taken as an indicator of how “gracious” commuters actually are.

One last word…

The above are some issues I have with opinion polls in Singapore.

Having said this, however, I am also peeved by a common and misplaced criticism of surveys seen on the Internet.

Netizens often intuitively hit out at what they perceive to be a sample size that is too small (in the range of a few hundred) to represent our 5.4-million population.

However, determining the right sample size for a large population is a highly technical matter.

As long as the total target population is large (say, more than 100,000), the required sample size depends on the desired margin of error and confidence level, as well as a host of practical considerations, and NOT on the target population size.

The table below shows sample sizes ranging from 100 to 1,000 (for large populations) and the corresponding margin of error at a given confidence level.


Here’s also an online calculator for your convenience.

(This blogpost first appeared on The Online Citizen).