The spheres are in commotion
The elements in harmony
She blinded me with science!
And hit me with technology
--Thomas Dolby
After the 1992 presidential election results came in, I remember my company's former CQO telling me that the accuracy of the polls spoke to the power of statistical sampling for estimating outcomes in a population. That seemed reasonable to a rookie like me back then.
Were I to respond to his assertion today, I would say that there is more to predicting outcomes from human subjects than a reasonable sampling plan--which is no small task in and of itself. How the questions are asked also makes a difference. For example, the ex ante question "Who do you plan to vote for?" has a different response profile than "Who will you vote for?", and the ex post "Who did you vote for?" has yet another.
Another issue involves the difference between what people say they will do and what they actually do. Survey respondents might say they will vote for one candidate but actually vote for another. Or perhaps they don't vote at all. People might indicate that they do not plan to vote but subsequently head to the polls--perhaps for the candidate the surveys deemed least likely to win.
The +/- X% "margin of error" figures that pollsters report offer an illusion of accuracy when the underlying methodologies fail to compensate for the true sources of error in survey research. The margin of error quantifies sampling error alone; nonresponse, coverage, and measurement error never enter the calculation.
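To make that concrete, here is a minimal sketch in Python of the conventional calculation behind those +/- X% figures, using a sample size and proportion I picked purely for illustration. Note that the sample size and the observed proportion are the only inputs--nothing about who declined to answer or how the question was worded appears anywhere.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Conventional 95% margin of error for a proportion p from a
    simple random sample of size n. It quantifies sampling error
    only; no other source of survey error enters the formula."""
    return z * math.sqrt(p * (1.0 - p) / n)

# Illustrative numbers: a 1,000-person poll showing 52% support.
moe = margin_of_error(p=0.52, n=1000)
print(f"+/- {moe:.1%}")  # roughly +/- 3.1%
```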
This is precisely what we saw in the run-up to election day: people blindly hanging their hats on forecasts from pollsters who offered the illusion of accuracy when, in reality and with notable exceptions, the polling methods generated estimates that were spectacularly wrong.
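A quick simulation shows how a poll can be "spectacularly wrong" while still reporting a tidy margin of error. The setup below is entirely hypothetical: an electorate split 50/50 in which one candidate's supporters answer pollsters at a somewhat lower rate. The resulting bias does not shrink as the sample grows, so the estimate sits outside its own stated margin of error at any sample size.

```python
import math
import random

random.seed(1)

def simulate_poll(n_respondents: int,
                  true_support_a: float = 0.50,
                  response_rate_a: float = 0.08,
                  response_rate_b: float = 0.10) -> float:
    """Draw poll respondents from an electorate where supporters of
    candidate A answer pollsters less often than supporters of B.
    All rates are hypothetical, chosen only to illustrate bias."""
    votes_a = 0
    completed = 0
    while completed < n_respondents:
        is_a = random.random() < true_support_a
        rate = response_rate_a if is_a else response_rate_b
        if random.random() < rate:  # did this person take the poll?
            votes_a += 1 if is_a else 0
            completed += 1
    return votes_a / n_respondents

for n in (1_000, 10_000, 100_000):
    est = simulate_poll(n)
    moe = 1.96 * math.sqrt(est * (1 - est) / n)
    print(f"n={n:>7,}: estimate {est:.1%} +/- {moe:.1%}  (truth: 50.0%)")
```

With these assumed response rates, the poll converges on roughly 44% support for candidate A no matter how large the sample, while the reported margin of error keeps shrinking--accuracy in form, not in substance.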
Why the error among pollsters was so skewed in one direction (which, fittingly, is known as 'bias' in survey research) is a subject for another day.