Why pilot questionnaires?

There are two key tests for a questionnaire: reliability and validity. A questionnaire is reliable if it provides a consistent distribution of responses from the same survey universe. It is valid if it measures what we want it to measure.

Testing a questionnaire directly for reliability is very difficult. It can be administered twice to the same sample of test respondents to determine whether or not they give consistent answers. However, the time between the two interviews cannot usually be very long, both because the respondent’s answers may in fact change over time and because, to be of value to the researcher, the results are usually required fairly quickly. The short period causes further problems in that respondents may have learnt from the first interview and as a result may alter their responses in the second one. Conversely, they may realize that they are being asked the same questions and deliberately try to be consistent with their answers. In testing for reliability we are therefore often asking whether respondents understand the questions and can answer them meaningfully.

Testing a questionnaire for validity requires that we ask whether the questions posed adequately address the objectives of the study. This should include whether or not the manner in which answers are recorded is appropriate.

In addition, questionnaires should be tested to ensure that there are no errors in them. With timescales to produce questionnaires sometimes very tight, there is often a real danger of errors.

Piloting the questionnaire can thus be divided into three areas: reliability, validity and error testing.

1. Reliability

  • Do the questions sound right? It is surprising how often a question looks acceptable when written on paper but sounds false, stilted or simply silly when read out. It can be a salutary experience for questionnaire writers to conduct interviews themselves. They should note how often they want to paraphrase a question that they have written to make it sound more natural.
  • Do the interviewers understand the questions? Complicated wording in a question can make it incomprehensible even to the interviewers. If they cannot understand it there is little chance that respondents will.
  • Do respondents understand the questions? It is easy for technical terminology and jargon to creep into questions, so we need to ensure that they are eliminated.
  • Have we included any ambiguous, double-barrelled, loaded or leading questions?
  • Does the interview retain the attention and interest of respondents throughout? If attention is lost or wavers, then the quality of the data may be in doubt. Changes may be required in order to retain the respondents’ interest.
  • Can the interviewers or respondents understand the routeing instructions in the questionnaire? Particularly with paper questionnaires, we should check that the routeing instructions can be understood by the interviewers or, if self-completion, by respondents.
  • Does the interview flow properly? The questionnaire should conduct a conversation with the respondent. A questionnaire that unfolds in a logical sequence, with a minimum of jumps between apparently unrelated topics, helps to achieve that.

2. Validity

  • Can respondents answer the questions? We must ensure that we ask questions to which they are capable of providing answers.
  • Are the response codes provided sufficient? Missing response codes can lead to answers being forced to fit into the codes provided, or to large numbers of ‘other’ answers.
  • Do the response codes provide sufficient discrimination? If most respondents give the same answer, then the pre-codes provided may need to be reviewed to see how the discrimination can be improved, and if that cannot be achieved, queries should be raised regarding the value of including the question.
  • Do the questions and the responses answer the brief? We should by this time be reasonably certain that the questions meet the brief, but we also need to ensure that respondents are answering the questions that we think we are asking.
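
The discrimination check in the list above can be approximated in a pilot analysis. The function, the 80 per cent threshold and the data below are assumptions for illustration only, not a method from the book:

```python
# Illustrative sketch: flag pilot questions whose pre-codes give too
# little discrimination, i.e. a single answer code dominates the
# responses. Threshold and sample data are invented for illustration.
from collections import Counter

def low_discrimination(responses, threshold=0.8):
    """True if one response code accounts for more than `threshold`
    of all answers given to the question."""
    counts = Counter(responses)
    top_share = counts.most_common(1)[0][1] / len(responses)
    return top_share > threshold

q1 = ["agree"] * 9 + ["disagree"]          # 90% give the same answer
q2 = ["agree", "neutral", "disagree"] * 3  # answers spread evenly

print(low_discrimination(q1))  # True  - review the pre-codes
print(low_discrimination(q2))  # False - codes discriminate adequately
```

A question flagged in this way is a candidate for revised pre-codes or, as the text notes, for a query about whether it is worth including at all.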

3. Error testing

  • Have mistakes been made? Despite all the procedures that most research companies have in place to check questionnaires before they go live, mistakes do occasionally still get through. It is often the small mistakes that go unnoticed, but these may have a dramatic effect on the meaning of a question or on the routeing between questions. Imagine the effect of inadvertently omitting the word ‘not’ from a question.
  • Does the routeing work? Although this should have been comprehensively checked, illogical routeing sequences sometimes only become apparent with live interviews.
  • Does the technology work? If unusual or untried technology is being used, perhaps as an interactive element or for displaying prompts, this should be checked in the field. It may work perfectly well in the office but field conditions are sometimes different, and a hiatus in the interview caused by slow working or malfunctioning technology can lose respondents.
  • How long does the interview take? Most surveys will be budgeted for the interview to take a certain length of time. The number of interviewers allocated to the project will be calculated partly on the length of the interview, and they will be paid accordingly. Assumptions will also have been made about respondent cooperation based on the time taken to complete the interview. The study can run into serious timing and budgetary difficulties, and may be impossible to complete, if the interview is longer than allowed for. Being shorter than allowed for does not usually present such problems, but may lead to wasteful use of interviewer resources.
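
For scripted questionnaires, the routeing check described above can be partly automated before piloting. The questionnaire structure and function below are an invented example, not part of the book's method:

```python
# Illustrative sketch of a routeing (skip-logic) check: every route
# must point at a question that exists, and every question must be
# reachable from the start of the interview. The questionnaire
# structure here is invented for illustration.

def check_routeing(routes, start):
    """Return (missing_targets, unreachable_questions)."""
    missing = set()
    seen = set()
    stack = [start]
    while stack:
        q = stack.pop()
        if q in seen:
            continue
        seen.add(q)
        for target in routes.get(q, []):
            if target is None:       # None marks end of interview
                continue
            if target not in routes:
                missing.add(target)  # route to a nonexistent question
            elif target not in seen:
                stack.append(target)
    unreachable = set(routes) - seen
    return missing, unreachable

# Q2 routes to Q9, which does not exist; Q4 is never routed to.
routes = {
    "Q1": ["Q2", "Q3"],
    "Q2": ["Q9"],
    "Q3": [None],
    "Q4": ["Q3"],
}
missing, unreachable = check_routeing(routes, "Q1")
print(missing)      # {'Q9'}
print(unreachable)  # {'Q4'}
```

A check like this catches the structural routeing errors; the pilot itself is still needed to find routes that are technically valid but illogical in a live interview.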

Source: Brace, Ian (2018), Questionnaire Design: How to Plan, Structure and Write Survey Material for Effective Market Research, 4th edition, Kogan Page.
