Reporting on Polls
A very timely article in the NZ Herald suggests the problem with polls is not how they are conducted, but the way they are reported.
I have an obvious bias as I own a polling company, but I have long argued that this is an issue.
In fact, when NZ First had a bill to ban publishing opinion polls during election campaigns, I suggested to some of the media that a reporting code for polls would be a useful compromise.
If we had such a code, here is what I would include in the draft:
1) Reserve the term “poll” for scientific random samples. 0900 phone ‘polls’ and other devices which depend on reader/viewer response should never be called polls but surveys.
2) Always include in the story the level of undecided respondents. One of the most important things in political polls is seeing who moves in and out of being undecided. Sometimes a party can go from, say, 35% to 38% without any more people saying they support it, because another party has had some of its previous supporters now say they are undecided.
3) Publish not only the margin of error for the poll, but, if you report on any demographic breakdowns, the margin of error for them as well. The margin of error for a 1,000 strong poll is 3.2%, but for, say, the Wellingtonians in that poll, it is 10.0%.
4) Always include the time period it was conducted over.
5) Always include somewhere the actual question asked.
6) If the media organisation has a website, include a link to the full poll report from the polling company.
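The arithmetic behind points 2 and 3 is easy to sketch. The snippet below first shows how a party's reported share (of decided voters) can rise with no new supporters when another party's voters move to undecided, then reproduces the 3.2% and 10.0% margin-of-error figures using the usual rule of thumb. The specific respondent counts (280 supporters, 200 then 270 undecided, a Wellington subsample of about 100) are illustrative assumptions, not figures from any actual poll.

```python
import math

# Point 2: party support is usually reported as a share of DECIDED voters,
# so a shift into "undecided" elsewhere lifts everyone else's percentage.
supporters = 280          # hypothetical party A supporters (unchanged)
decided_before = 800      # 1,000 respondents, 200 undecided
decided_after = 730       # 70 of another party's supporters go undecided

print(round(100 * supporters / decided_before, 1))  # 35.0
print(round(100 * supporters / decided_after, 1))   # 38.4

# Point 3: maximum margin of error at roughly 95% confidence, using the
# worst case p = 0.5 and z ~ 2, i.e. the rule of thumb moe ~ 100 / sqrt(n).
def margin_of_error(n, p=0.5, z=2.0):
    return 100 * z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(1000), 1))  # full sample of 1,000 -> 3.2
print(round(margin_of_error(100), 1))   # ~100 Wellingtonians -> 10.0
```

Note that party A's share jumps by more than three points even though not a single extra person supports it, which is exactly why the level of undecideds should be reported alongside the headline numbers.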
There’s a lot more one could do, but it is unrealistic to expect media stories to include the full disclaimers etc. that a full poll report normally has.
Personally I would also like to ban media from declaring that a 1% change in a party’s vote is significant, but that is getting too subjective.
In another article on polling, Janet Hoek also slams the reporting of minor parties as being “under the margin of error”. I have blogged about the stupidity of this label on half a dozen occasions.