No, the survey in the title doesn’t exist, but it was the easiest way to summarize the problem of response rates. The hypothetical survey reached its absurd result because respondents first had to agree to take a survey before they were asked the survey question. Thus, the survey arrives at the incorrect conclusion that nobody minds taking surveys.
Response rates are usually buried in the footnotes of the full report. They are almost never mentioned in press summaries and thus never become part of the media coverage. Can you imagine a headline reading “Senator Gassbag ahead 55%-45% among likely voters who didn’t hang up on a robo-call”?
Survey companies do have techniques for adjusting for differentials in response rates. For example, if it is known that the population of a city is 45 percent Hispanic but only 30 percent of the respondents are, then the answers from Hispanic respondents might be weighted more heavily to adjust for the lower Hispanic response rate. But that weighting presumes the Hispanics who did respond to the poll are exactly like the Hispanics who didn’t. Obviously they differ in at least one respect – they don’t like taking polls! Bear in mind that this cannot simply be dismissed as part of the margin of error. The calculation of the margin of error presumes a random sample.
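The weighting adjustment described above can be sketched in a few lines. The numbers (45 percent Hispanic in the population, 30 percent among respondents) come from the hypothetical example; the function names and the sample support rates are purely illustrative:

```python
def poststratification_weights(pop_share, sample_share):
    """Weights that restore a group to its known population share.

    Each respondent in the underrepresented group is counted more
    heavily; everyone else is counted slightly less.
    """
    in_group = pop_share / sample_share
    out_group = (1 - pop_share) / (1 - sample_share)
    return in_group, out_group

# Population is 45% Hispanic, but only 30% of respondents are.
w_hispanic, w_other = poststratification_weights(0.45, 0.30)
# Each Hispanic respondent now counts 1.5x; each other respondent ~0.79x.

def weighted_estimate(support_in_group, support_out_group, pop_share):
    """Overall estimate after weighting, given each group's support rate."""
    return pop_share * support_in_group + (1 - pop_share) * support_out_group

# Hypothetical: 70% support among Hispanic respondents, 40% among others.
adjusted = weighted_estimate(0.70, 0.40, 0.45)
```

Note what the arithmetic cannot do: it scales up the Hispanics who answered, but it has no information at all about the ones who refused, which is exactly the objection in the paragraph above.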
The assumption made by most pollsters is that the unknown differences between responders and non-responders are not great. Absent this assumption, the big question is: what other traits might correlate with unwillingness to participate in a poll?
Those who habitually never participate in polls are perhaps the least understood segment of any population. What are you going to do, take a poll of them?
“Hello, would you mind taking a poll today?”
“Thank you, but this poll is only about people who don’t want to speak with us.”
There are experiments one might design that could be informative about people who don’t participate in polls. For example, if you had a room full of people that you knew were 30 percent Democrats, 30 percent Republicans, and 40 percent Independents, you could ask them as they were leaving whether they wanted to take a poll, yielding a correlation between willingness to participate in polls and political party. But how would you know the composition of the room beforehand without asking?
Consideration of possible bias caused by non-respondents should occur before any poll result is finalized. In some cases, the assumption that responders and non-responders are not meaningfully different is easy to accept. But this is not always the case. For example, a recent poll attempted to measure attitudes regarding online privacy. For this subject, one might reasonably suspect that the people most concerned about the privacy of their personal information online are also less willing to talk to a random stranger on the telephone about themselves. Thus, that poll should be viewed with significant suspicion, beyond the margin of error, that the real level of concern regarding online privacy was not accurately reflected in its results.
Polling remains a useful research tool in a great many cases. As with any research, the poll needs to be scrutinized before its conclusions can be accepted.