In September 2012, in my post The Media’s Error on the Margin of Error of Polls, http://williamcinfici.blogspot.com/2012/09/the-medias-error-on-margin-of-error-of.html, I explained how most of the media and many other political observers mislead the public by interpreting the margin of error of public opinion polls as referring only to the spread between the percentages for responses to poll questions, instead of as a figure that applies to the percentage for each response.
As a result of this erroneous premise, the media and others draw even more conclusions that are not scientifically supported. To compound their errors, they draw conclusions that violate even their own inadequate understanding of the margin of error. I have identified three examples that have arisen especially often lately.
The first error is the suggestion that a change in the percentage for a response from one poll to a subsequent poll necessarily represents an increase or decrease in support, even when the change is smaller than the margin of error. Because the percentage for each response represents a span of plus or minus the margin of error, any change smaller than the margin of error remains within that span, which means the difference between the two polls does not necessarily represent any statistically significant change. For example, if a response in one poll registers 50% and the margin of error is plus or minus 3%, then the true figure may be anywhere from 47% to 53%. Therefore, it would be incorrect to describe the percentage for that response in a subsequent poll as having “increased” to 52% or “decreased” to 48%, as these figures fall within the span of the original poll and thus do not necessarily represent any upward or downward movement whatsoever.
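The check described above can be sketched in a few lines of Python. This is a minimal illustration of the rule as stated in this post (a change matters only if it exceeds the margin of error); the function name is my own invention for the example.

```python
def is_significant_change(old_pct: float, new_pct: float, moe: float) -> bool:
    """Return True only if new_pct lies outside old_pct's +/- moe span."""
    return abs(new_pct - old_pct) > moe

# 50% with a +/-3% margin of error spans 47% to 53%,
# so neither 52% nor 48% represents real movement.
print(is_significant_change(50, 52, 3))  # False: within the 47-53 span
print(is_significant_change(50, 48, 3))  # False: within the 47-53 span
print(is_significant_change(50, 54, 3))  # True: outside the span
```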
The second error is similar: the description of an order of preference among responses without regard to the margin of error. The media frequently reports such orders even when the differences fall within the margin of error, which is inconsistent even with the media’s inadequate understanding that the margin of error applies to the spread between responses. However, as I noted above, that understanding is only half right. For example, if one response to a poll question registers 25%, another 15% and a third 13%, and the margin of error is plus or minus 3%, then one may conclude that the 25% response is first in order of preference, but the other two responses are statistically tied with each other, in contrast to the frequent media reporting that the 15% response is second and the 13% response third. In this example, even the media’s half-correct understanding of the margin of error ought to prevent it from declaring a definitive order of preference in a statistical dead heat, but there are really more such statistical ties than it acknowledges. For example, if the first response registers 25%, another 14%, a third 10% and a fourth 9%, and the margin of error is plus or minus 3%, then the three responses from 9% to 14% are all statistically tied, as their spans of 11-17%, 7-13% and 6-12%, respectively, all overlap with one another.
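The “statistical tie” logic in the second example above can likewise be sketched in Python: two responses are tied whenever their margin-of-error spans overlap. The figures are taken from the 25%, 14%, 10% and 9% example in this post; the helper names are my own for the illustration.

```python
from itertools import combinations

def span(pct, moe):
    """The range a reported percentage may really occupy."""
    return (pct - moe, pct + moe)

def tied(a, b, moe):
    """Two responses are statistically tied if their spans overlap."""
    lo_a, hi_a = span(a, moe)
    lo_b, hi_b = span(b, moe)
    return lo_a <= hi_b and lo_b <= hi_a

responses = {"first": 25, "second": 14, "third": 10, "fourth": 9}
moe = 3
for (name_a, a), (name_b, b) in combinations(responses.items(), 2):
    if tied(a, b, moe):
        print(f"{name_a} ({a}%) and {name_b} ({b}%) are statistically tied")
```

Running this reports ties among the 14%, 10% and 9% responses only; the 25% response stands alone, just as the paragraph above concludes.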
The third error leads the media to make unfair judgments based upon percentages that are less than the margin of error, such as excluding from a debate a candidate who registered less than one percent in a poll (or less than one half of a percent, if such a response is rounded up to one percent). No conclusions ought to be drawn from any percentage in a poll that is less than the poll’s margin of error. For example, if the margin of error is plus or minus three percent, then a reported preference for a candidate of one percent may really be as much as four percent (or nearly four and a half percent, once rounding is taken into account). Conversely, a candidate may register as much as 3% in a poll yet really be the preference of as few as 0% of those responding to the poll questions. Therefore, a media policy intended to exclude any candidate preferred by less than one percent of those responding, rounded, may in fact exclude candidates preferred by as much as four and a half percent of responders. It is disturbing that professional media outlets, political parties or even candidates accept unscientific conclusions about any figure in a poll that is less than the margin of error.
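The arithmetic above can be made concrete with a short sketch. It assumes, as the example in this post does, a plus or minus three percent margin of error and conventional rounding, under which a reported 1% can conceal a measured figure of just under 1.5%; the function name is my own for the illustration.

```python
def max_true_support(reported_pct: float, moe: float, rounded: bool = True) -> float:
    """Upper bound on the true support behind a reported poll percentage."""
    # A rounded report of N% can come from a measured value just under N + 0.5.
    measured_max = reported_pct + 0.5 if rounded else reported_pct
    return measured_max + moe

# An unrounded 1% with a +/-3% margin of error could really be 4%.
print(max_true_support(1, 3, rounded=False))  # 4.0
# A rounded 1% could really be nearly 4.5%.
print(max_true_support(1, 3, rounded=True))   # 4.5
```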
The professional media portrays itself as an authoritative, accurate source of information, despite its notoriety for bias, inaccuracy, and even errors of grammar, punctuation and pronunciation, while arrogantly dismissing rival media and especially non-professional journalists as inferior. It offers its conclusions from polling data as if they were scientifically proven to be certain, and then blames the pollsters when the results do not match those conclusions, even if the results were really in accord with the science of polling.
Knowing how to read polls and understand their scientific accuracy is a useful skill, but understanding the many ways in which the professional media is wrong is even more useful.