Tuesday 3 January 2012

Questions, questions


The famous phrase, “garbage in, garbage out,” originated in the world of computer programming and refers to the need to ensure that the program is properly and carefully coded into the machine. If you make an error in the code, the “garbage in” would result in nothing but garbage coming out when the program is run. In the world of public opinion research, this analogy applies when it comes to the wording used in survey questions. If a question is improperly worded, then the only result is “garbage out.” While it may seem easy to design survey questions, the task can prove far more difficult than it appears.

One of the criticisms often raised about opinion research is that questions can be designed in such a way as to elicit the result desired by the client (or the pollster). While there are some who believe that these types of questions are effective at supporting a point of view or advocating a cause, the reality is that these types of questions carry very little value when assessing the public mood. The bias is often obvious, which means that a fair and reasonable person will likely dismiss the results, even if they appear in a media story or are the focus of a media event. The money spent to collect the data from these questions is wasted from a research point of view.

It is important that the questions used in any quantitative research project, such as a telephone or Internet survey, be seen as balanced and fair by an objective and reasonable person. Care needs to be taken not to use “loaded” or otherwise biased terms and to be sure that the question is clear when administered over the telephone or through a computer screen. This does not mean that you must avoid provocative questions (often called “argument testing”) but it does mean that, if you are presenting an argument on one side of an issue, you should also present an argument of equal weight on the other side. This can be done as one question or as a battery of questions, but an objective critic should be able to see that the question designer is presenting a balanced set of arguments.

Another consideration that needs to be taken into account when designing questions falls under the general heading of “measurement error.” This can occur when a designer sets inappropriate response categories in the question. While it would be appropriate to set ranges of “five or less” or “six or more” when inquiring about the number of CDs purchased last week, these categories would not provide sensible data on the number of automobiles purchased over the same time period.

Measurement error can also be introduced when you are asking people to recall behaviours over long periods of time. People are better able to accurately recall the number of times, if any, that they used a particular government service or program over the past month than over the past five years. Again, care needs to be taken to ensure that the time periods used are appropriate to the question.

Then there are “double-barrelled” questions. These are questions that try to cover more than one topic at a time, such as “Do you agree or disagree that the government should spend more money on military equipment and participate in the US missile defence program?” While there are some people who would agree or disagree with both actions, what about those who may want to reduce military spending while participating in the missile defence program? The result of these questions is that a respondent (and the person analysing the data) is not sure which part of the question is being answered. The best approach would be to split this into two separate questions and compare the responses. Sure, asking one question can be cheaper than asking two, but a “double-barrelled” question is a false economy.

Sometimes, survey participants are asked to respond to unanswerable questions. A question like “when did you stop supporting armed terror?” certainly falls into this category, but an unanswerable question usually refers to a question that is so detailed or complex that a respondent cannot provide an answer without referring to other information or, more likely, cannot be bothered to answer.

With all these concerns in mind, how does a question designer know if they have created a successful survey questionnaire? Again, the test of the objective critic or the view of the fair and reasonable person can be useful. Further, many public opinion research firms (and most Government of Canada survey research projects) subject their questionnaires to a “pre-test.” This is where the questionnaire is put to a small group of survey participants, usually 30 or so (in both languages if the questionnaire is to be used nationally). As these pre-test interviews are conducted, the survey interviewers make observations about their ability to clearly administer the questions as well as the ability of respondents to answer them. Respondents are more than willing to volunteer their views on questions they feel are convoluted or biased. As well, experienced interviewers can often flag questions or phrases that are difficult to administer or for respondents to understand. The comments from the pre-test are very useful for fine-tuning a questionnaire to ensure that better data is collected.

After all, “garbage in, garbage out.”

- 30 -
 
Originally published in the Hill Times
