Thursday 22 December 2011

A beginner's guide to public opinion research

There is a difference between public opinion and public opinion research. First of all, the opinions of the public can be found almost everywhere: at the coffee shop, in the workplace, in letters to the editor, in op-ed pieces and elsewhere. Then there are those who claim to speak on the public's behalf: advocacy groups, elected officials, charities, churches, unions and so on. From radio talk shows to Internet discussion boards, there is plenty of public opinion out there on any subject you can think of. And the diversity of opinion in public discourse on these topics is astounding.

However, if you want to understand the opinion of the general public on a subject, it cannot simply be assumed or pieced together from an analysis of all these expressions of public opinion. Public opinion research must be conducted.

Although it has been disparaged as more of an art (or as pure chicanery) than a science, public opinion research is the application of scientific methods of inquiry to understanding the views of the public. When done properly and with intellectual rigour, research can provide an accurate understanding of the public psyche on almost any issue or set of issues (within the limits of budget and time, of course). When done haphazardly, the results range from nuisance to outright nonsense. As one of my mentors once said, “Bad data is worse than no data.”

We certainly know that public opinion research is used to guide almost every major decision in the public and private sector. It is used to develop new products and policies. It is used to evaluate the commercials we see and the programs that we use. It even provides content for news stories, columnists and pundits. But what is it? How can the views of such a relatively small number of Canadians responding to a telephone survey, say a thousand or so, represent a diverse nation of 32 million souls? Why is any credibility given to the views of one hundred people, selected at random in groups of ten or so, that are expressed in two-hour focus group sessions held in five different major urban centres across Canada?

With regard to the first example, the telephone survey, the views of 1,000 people can be representative of the population if they are chosen according to a sound methodology that ensures random selection and representation across all demographic groups. The point of this kind of research is to “quantify” public opinion by using a questionnaire that is administered consistently to participants. For example, questions on voting intention are used to demonstrate, usually in percentage terms, the support for the various political parties nationally, regionally and by major demographic grouping. The data that comes from quantitative research is expressed in numeric terms (“37% say they would prefer the second option” or “social priorities obtain a mean value of 7.2 versus economic priorities at 6.7”), which enables the use of a full range of statistical analysis on the data. The only limitation to the analysis (outside of time and budget) is the amount of data collected and the number of people participating in the survey (the sample).

The accuracy of a survey is determined by the number of people in the sample: the fewer the participants, the less accurate the results. In our sample of 1,000 participants, for example, the overall results will be accurate to within +/- 3.2 percent. This is the margin of error for this sample size. If we want to look at what female participants say nationally in this survey, the sub-sample will contain approximately 500 participants and will have an accuracy of +/- 4.5 percent, which is still fairly good. However, if we want to look at what female participants in British Columbia have to say in this survey, data from the 65 or so participants would have an accuracy of +/- 12.4 percent, which limits the utility of this data.
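The relationship between sample size and margin of error described above can be sketched with the textbook formula for a simple random sample, 1.96 × √(p(1−p)/n) at a 95 percent confidence level with worst-case p = 0.5. This is an illustrative assumption on my part, not the author's stated method; the figures it produces come out slightly below the ones quoted in the column, which may reflect rounding conventions or design adjustments used in practice.

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Worst-case margin of error, in percentage points, for a simple
    random sample of size n at roughly 95% confidence (z = 1.96)."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# Sample sizes from the examples above: full sample, women nationally,
# women in British Columbia.
for n in (1000, 500, 65):
    print(f"n = {n:4d}: +/- {margin_of_error(n):.1f} points")
```

Halving the sample does not double the margin of error; because of the square root, error grows slowly as samples shrink, until very small sub-samples (like the 65 B.C. respondents) become too imprecise to be useful.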

On the other hand, focus groups take an almost opposite approach to investigating public opinion. While quantitative research methods rely heavily on numbers, focus groups, executive interviews and other qualitative research methods reject the use of numbers or any kind of quantification. Industry guidelines strongly urge research practitioners not to use percentages, fractions or numerical expressions of any kind in reporting these results. This is not only because this kind of research is not statistically representative, but also because the focus of qualitative research is on the discussion with participants and how they agree or disagree with the points raised. Rather than trying to determine the amount of interest in a party, product or program, qualitative research can investigate the reasons why there is interest (or a lack thereof), receive feedback in the participants’ own words and test possible options.

In both types of public opinion research, participants are solicited for their views. This is an important distinction from the other kinds of public opinion discussed at the start of this article. Those who write letters to the editor or call a radio talk show are moved to express their own views, often negative ones. The motivations are diverse, but they are “selecting themselves” to contribute their opinions. This self-selection is an important part of public discourse, but it is something that is avoided in good public opinion research (which is why it does you no good to call up a research company to volunteer for focus groups).

As a general practice, a healthy scepticism is required when trying to interpret the results of any public opinion research reported in the media and elsewhere. A review of the questions used, the dates when the data was collected, and the sample size (with regard to quantitative research) will tell you a great deal about the way the data should be interpreted. Unfair questions put to an unrepresentative slice of the population, the staple of media website “polls”, will not give any useful insights and are mischievous at best. Trying to extrapolate a complex regional analysis from a meagre national sample is another common error.

We can be better consumers of public opinion research. In this regard, we should remember John Diefenbaker’s criticism from the opposition benches: “They use polls the way a drunk uses a lamppost – for support rather than for illumination.”

- 30 -

Originally published in The Hill Times
