
Take care with your sample
“Recommended by 93% of ‘Red’ readers”
The Advertising Standards Authority (ASA) banned a TV advert for the hair dye Nice ’n Easy in 2009. The advert included a voiceover that said “93% of ‘Red’ magazine readers would recommend Nice ’n Easy to a friend. The other 7% probably don’t have any friends.”
Afterwards, some text flashed up on the screen: “Participants in a survey of 245 ‘Red’ magazine readers, April 2008”. The problem wasn’t so much the small sample (though a professional opinion survey typically covers at least 1,000 people) as the way the data were collected. Participants had volunteered to take part, a self-selecting group rather than a random sample of readers, and were sent a pack of the hair dye along with the survey (which included a question about whether they would recommend the product to a friend). If they returned the survey, together with a photo of themselves and a short story, they had a chance of winning a trip to New York.
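To see why the sample size was the lesser problem, it helps to put numbers on it. The sketch below (an illustration added here, not part of the original survey analysis) uses the standard normal approximation for the margin of error of a proportion: a genuinely random sample of 245 readers would already pin the figure down to within a few percentage points.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a sample proportion,
    using the normal approximation: z * sqrt(p * (1 - p) / n)."""
    return z * math.sqrt(p * (1 - p) / n)

# The advert's figures: 93% of 245 respondents.
for n in (245, 1000):
    moe = margin_of_error(0.93, n)
    print(f"n = {n:4d}: 93% ± {100 * moe:.1f} percentage points")

# n =  245: 93% ± 3.2 percentage points
# n = 1000: 93% ± 1.6 percentage points
```

The margin of error shrinks with the square root of the sample size, but it only measures random sampling error. A sample recruited by volunteering, with a prize riding on the entries, carries a bias that no number of extra respondents can remove.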
The ASA banned the advert because it believed the claim of 93 per cent was misleading, saying: “We were concerned that the...respondents entering the competition were selected on the merits of their competition entry [the short story] and may have been inclined to be less than impartial in their survey responses in order to stand a better chance of winning.”
Questions for discussion
- How many of a magazine’s readers should you ask before you can make claims about the opinions of its readership?
- Is it OK to provide incentives to people taking part in surveys? Will you get the same results, or will the incentive skew the findings?
- Is there any guarantee that the people filling in the survey actually tried the hair dye they were sent? How could it have been done differently?