Types of pilot questionnaire survey

There are various types of pilot surveys that might be carried out according to the perceived need for piloting, time available and budget. These are:

  • informal pilots carried out with a small number of colleagues;
  • cognitive interviewing in which the questionnaire is tested amongst respondents;
  • accompanied interviewing which may be used principally to test for interviewer and routeing errors;
  • large-scale pilot studies where a larger number of interviews can be used to test for completeness of brand lists or incidence of sub-groups;
  • dynamic pilots, where question wording is changed between interviews to test alternatives based on responses received.

1. Informal pilot

An informal pilot represents the minimum that any questionnaire should undergo. In the informal pilot, the questionnaire writer should carry out the interview with a number of colleagues. At the minimum, this will give an indication of the length of time taken to complete the interview. It must be remembered though that an interview undertaken in the calm conditions of an office will usually take less time than one in the field when the respondent may be subject to a number of distractions and interruptions. Because colleagues are familiar with the conventions of questionnaires and they know it is not a ‘real’ interview, they will also tend to answer more quickly and without the same pauses for thought that occur with respondents.

Ideally, the colleagues interviewed should meet the eligibility criteria for the study, so that they can answer as respondents. This may highlight incomplete sets of pre-codes when a colleague’s responses don’t fit those provided, or an inadequacy in the routeing or in the questions when key information is not elicited.

If colleagues do not fit the eligibility criteria, then they must be asked to pretend that they do. This is less likely to identify problems such as incomplete code lists, as the pretend respondent, who may not know the market well, will tend to give the same sorts of responses that the questionnaire writer has already anticipated. Nevertheless, this type of interview may well identify issues of timing, wording and routeing errors.

It is often worthwhile asking a colleague to pretend to be someone in the market with particular characteristics or a particular minority pattern of behaviour. If there is complex routeing in the questionnaire, this approach can be used to test it. If the colleague can be as obstructive as possible, challenging questions and providing the most difficult responses that he or she can think of, this will give the questionnaire a further test. Remember that the questionnaire has to work not just for most respondents but for all respondents.
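
Where the questionnaire is scripted for computer-assisted interviewing, the routeing itself can also be checked mechanically before anyone sits down with a colleague. The sketch below is purely illustrative, not a method described in the text: it assumes a hypothetical questionnaire held as a simple Python dictionary of questions, answer options and routeing targets, and walks every possible answer path to confirm that none routes to a missing question or loops back on itself.

```python
# Illustrative sketch: exhaustively walk every routeing path of a small,
# hypothetical questionnaire. Question IDs, wording and routes are invented.

QUESTIONNAIRE = {
    "Q1": {"text": "Do you ever buy ground coffee?",
           "routes": {"Yes": "Q2", "No": "Q3"}},
    "Q2": {"text": "Which brands have you bought in the last month?",
           "routes": {"Brand A": "Q3", "Brand B": "Q3", "Other": "Q2a"}},
    "Q2a": {"text": "Which other brand?",
            "routes": {"<any answer>": "Q3"}},
    "Q3": {"text": "How often do you drink coffee at home?",
           "routes": {"Daily": "END", "Less often": "END", "Never": "END"}},
}

def check_routeing(questionnaire, start="Q1"):
    """Follow every possible answer path; report broken routes or loops."""
    problems = []
    stack = [(start, [])]                     # (current question, path so far)
    while stack:
        qid, path = stack.pop()
        if qid == "END":
            continue                          # this path terminates correctly
        if qid not in questionnaire:
            problems.append(f"Route to unknown question '{qid}' via {path}")
            continue
        if qid in path:
            problems.append(f"Routeing loop at '{qid}' via {path}")
            continue
        for answer, target in questionnaire[qid]["routes"].items():
            stack.append((target, path + [qid]))
    return problems

for issue in check_routeing(QUESTIONNAIRE) or ["No routeing problems found"]:
    print(issue)
```

A check of this kind complements, rather than replaces, the obstructive colleague: it can prove that every path leads somewhere valid, but not that the questions along the way make sense.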

The questionnaire writer should conduct these interviews, and it may be that no more than two or three such interviews are required. The questionnaire writer is the best person to understand the intent of each question and therefore to identify if it is misunderstood. However, if possible, a colleague who has not been involved in the questionnaire design can also be used as an interviewer. This will give the questionnaire some degree of testing as a tool to be used by someone not familiar with it.

Colleagues may not be thought to be the ideal sample for testing questionnaires, but it has been shown that people with a knowledge of questionnaire design are more likely to pick up errors in questions than are people without that knowledge (Diamantopoulos, Schlegelmilch and Reynolds, 1994), so they are a good place to start.

Self-completion questionnaires should be given to a small number of colleagues to complete. These colleagues should be asked to make notes about any questions or routeing instructions with which they have difficulty.

2. Cognitive testing

Testing a questionnaire amongst colleagues may identify some issues with it, but cannot properly replicate what will happen in the field with real respondents, nor their understanding of the questions nor their thought processes when answering. To test these requires specific pre-test interviews to be carried out with a number of respondents who fall into the survey population. This can be done in focus groups but is more usually carried out in one-to-one interviews. These interviews can be carried out by the researchers themselves, who have a good knowledge of the subject and the questionnaire; cognitive psychologists, who have a good understanding of the processes of cognition; or specially trained senior interviewers who have expertise in this area.

As these interviews proceed, the interviewers talk to the respondents to find out what they understood by certain questions or why they responded as they did. The researchers should make notes throughout the interview of points that they wish to return to.

It is also possible to ask the respondents to ‘think out loud’ as they answer the questions, and so give a running commentary on their thought processes. What the interviewer is aiming to achieve, based on models put forward by Tourangeau (1984) and Eisenhower, Mathiowetz and Morganstein (1991), is to determine whether respondents:

  • have a memory of what is being asked about and hence the ability to answer the question (encoding in memory);
  • understand the question (comprehension);
  • can access the relevant information in their memory (retrieval);
  • can assess the relevance to the question of what they retrieve (judgement);
  • can provide answers that meet the categories provided, and decide whether they want to provide an answer, or whether they want to provide a socially acceptable answer (communication/response).

One question always worth asking is whether the respondents felt that the questionnaire allowed them to say all that they wanted to say on the subject. It is not uncommon to find that one of the main things that the respondent wanted to say was not asked about. It may not have arisen because it was not seen as relevant to the objectives of the study. Nevertheless, the impression left with the respondent is that the study was incomplete and that decisions would be made without full knowledge of the facts. This perception can be damaging to the image and reputation of market research, and could affect the willingness of the respondent to take part in future surveys. If there is an issue that consistently comes through as important to respondents but that is not asked about, then consideration should be given to including it in the interview regardless of its apparent relevance to the study objectives.

Using cognitive testing of this nature can reveal a range of difficulties with the questionnaire. In a cognitive test of a questionnaire associated with the US Current Population Survey (McKay and de la Puente, 1996) problems were identified with:

  • sensitive questions that respondents were uncomfortable answering;
  • abstract questions that respondents found difficult to understand and to answer;
  • vocabulary problems where the questionnaire writers had used terms unfamiliar to some of the respondents;
  • order effects in which responses changed depending on the order in which questions were asked.

After the questionnaire had been revised, further testing identified other confusing and redundant questions.

Respondents should be chosen to represent a broad range of the types of people to be included in the main study. Any particular sub-groups whose members might experience some difficulties with the questionnaire should be represented.

Questionnaire writers should also conduct some interviews themselves in order to be able to understand any difficulties that the interviewers might have with following the questionnaire instructions or in reading out the words of the questions as they have been written.

This type of pilot survey should allow the researcher to amend the questionnaire so that there can be confidence that it asks respondents questions that they can understand, and that it can cope with the answers that they give.

Self-completion questionnaires, either paper or electronic, can be tested by asking a small number of eligible respondents to complete a questionnaire, and then talking them through what they understood from the questions and the way in which they responded to them.

3. Accompanied interviewing

A possible further stage of piloting face-to-face or telephone interviews is for the researcher to accompany or listen in to interviews carried out by regular members of the interviewing force.

The questionnaire writer should be listening for:

  • mistakes by the interviewer in reading the questions;
  • mistakes by the interviewer in following routeing instructions;
  • errors in the routeing instructions that take the respondent to the wrong question.

If it has not been possible to carry out a proper cognitive test, this approach can be combined with interviewing the respondents in order to test the questions. However, this can sometimes cause conflict in the approach of the researcher due to the multiple objectives of testing both the way in which the interviewer handles the questionnaire and the way in which the respondents understand and answer the questions.

4. Large-scale pilot survey

With completion of the small-scale pilot survey, it may be possible to move to a larger-scale exercise. The objective here is to extend the pilot exercise to a larger number of interviewers and to a broader range of respondents, and for there to be a sufficient number of respondents for some analysis to be carried out to confirm that the questions asked are delivering the data required to answer the project objectives.

Some commentators suggest that the interviewers used should be the most experienced interviewers available, who are capable of identifying ambiguities and other errors in the questions. Others suggest that a mix of interviewer ability is more appropriate, as it reflects the ability range of interviewers likely to be used on the main study. This range of views suggests that the principal purpose of the pilot study should be determined and the interviewers chosen accordingly. Thus if the interview is straightforward in terms of routeing and instructions, and the focus of the pilot is more on the wording of the questions, more experienced interviewers may be more appropriate. If the focus, however, is equally on how well the interviewers can cope with a complex questionnaire, then a range of abilities would appear to answer the needs better.

This type of large-scale pilot is likely only to be carried out with large-scale studies, where the cost of failure is high if the study is unable to meet its objectives.

Upwards of 50 interviews may be carried out in this pilot, which should be designed to cover different sections of the market and possibly different geographical regions. It is at this stage that small regional brands may be discovered that should be added to brand lists, or unanticipated minority behaviour that had not been catered for. (The small-scale pilot survey is only likely to clarify anticipated minority behaviour.)

It is at this stage that unusually high numbers of ‘Don’t know’ or ‘Not answered’ responses may indicate an issue with a question.
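
Because these checks are essentially counting exercises, they can be scripted once the pilot data have been captured. The following is a minimal sketch, assuming the pilot responses have been exported to a hypothetical CSV file with one column per question; the file name, column layout and 10 per cent threshold are all assumptions made for the example.

```python
import csv
from collections import Counter, defaultdict

# Illustrative sketch: flag questions whose pilot data contain an unusually
# high share of "Don't know" / "Not answered" responses.
PROBLEM_CODES = {"Don't know", "Not answered", ""}
THRESHOLD = 0.10  # flag any question where more than 10% of answers are problem codes

counts = defaultdict(Counter)
with open("pilot_responses.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):            # one column per question
        for question, answer in row.items():
            counts[question][answer.strip()] += 1

for question, tally in counts.items():
    total = sum(tally.values())
    problem = sum(tally[code] for code in PROBLEM_CODES)
    rate = problem / total if total else 0.0
    flag = "  <-- review this question" if rate > THRESHOLD else ""
    print(f"{question}: {rate:.0%} Don't know / Not answered{flag}")
```

A flagged question is only a prompt for investigation; the interviewer debrief and the completed questionnaires should show whether the problem lies in the wording, the pre-codes or the routeing.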

The questionnaire writer is unlikely to be able to be present at all of the interviews. Indeed, doing so could be counterproductive, as it would be difficult not to give guidance to an interviewer consistently making an error. Interviewers should therefore be asked to write notes on each interview. They should be provided with note sheets on which to record comments – their own and the respondents’ – as they go through the interview, which can be referred to later.

A debriefing of the interviewers should be held if possible, where they are brought to a central location to discuss their experiences with the questionnaire. The questionnaire writer should have seen all of the completed questionnaires before the debrief so as to have determined where there might still be issues with some questions, including issues that the interviewers themselves might not be aware of. If, for example, they all consistently misinterpret a question, they are unlikely to identify that as a problem. It will require the questionnaire writer to do so.

Should significant changes be made to the questionnaire as a result of the pilot testing, then, of course, another round of pilot testing should be carried out.

Although not part of the questionnaire development process, a further use to which the large-scale pilot survey can be put is to give an indication of the incidence of minority groups within the research universe. If it is intended that the study should be capable of analysing specific sub-groups, the incidence of which is unknown, the pilot sample can give a first indication of this and so suggest whether the intended sample size of the main study is sufficient for this intended analysis. This may lead to revision of the sample size or sample structure for the main survey.
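
The arithmetic behind this check is simple enough to show. The sketch below uses invented figures, not data from the source text: it takes the sub-group incidence observed in the pilot, puts an approximate 95 per cent confidence interval around it, and converts the minimum number of sub-group interviews required for separate analysis into the main-study sample size that would be needed.

```python
import math

# Illustrative arithmetic only; pilot size, hits and the minimum sub-group
# size required for analysis are all invented figures.
pilot_n = 60            # interviews in the large-scale pilot
pilot_hits = 9          # respondents who fell into the sub-group of interest
min_subgroup = 100      # sub-group interviews needed for separate analysis

p = pilot_hits / pilot_n                      # estimated incidence (15%)
se = math.sqrt(p * (1 - p) / pilot_n)         # standard error of that estimate
low = max(p - 1.96 * se, 0.0)                 # ~95% confidence interval
high = min(p + 1.96 * se, 1.0)                # (low is > 0 with these figures)

print(f"Estimated incidence: {p:.1%} (95% CI {low:.1%} to {high:.1%})")
print(f"Main sample needed at central estimate: {math.ceil(min_subgroup / p)}")
print(f"Main sample needed at lower bound:      {math.ceil(min_subgroup / low)}")
```

The gap between the central and lower-bound figures is a reminder that a pilot of this size gives only a rough estimate of incidence, which is precisely why it may lead to revision of the sample size or structure rather than fixing it outright.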

5. Dynamic pilot

The dynamic pilot is a type of pilot exercise that can be very useful where a questionnaire is experimental. This is similar in scale to the small pilot survey. However, instead of the questionnaire writer listening in to a number of interviews and then deciding what is and is not working, the questionnaire is reviewed after each interview and rewritten to try to improve it. The client and researcher will often do this together. The improved questionnaire is then used for the next interview, after which it is reviewed again.

This is a time-consuming and possibly costly process, particularly if a central location has to be hired to accommodate it. However, where there is real concern about the sequence of questions or the precise wording of questions, it can be the quickest way of achieving a questionnaire that works, particularly if the client is part of the dynamic decision-making process.

An example of where this might be appropriate is if we wish to test the reaction to a complex proposed government policy. In this situation, it may be important to ensure that respondents understand some of the detail of the policy. A key component of the questionnaire design would be how to explain a number of different elements of the policy and gain reaction to each one. So we may need to test the wording of the descriptions of the different elements in order to judge how clearly and correctly they convey the policy; and to assess any order effects dependent on the sequence in which the components are revealed. By observing the reaction of the pilot respondents and where necessary asking them questions regarding what they understand from the descriptions, the questionnaire writer can adjust the wording and the order of the questions between interviews until a satisfactory conclusion is reached.

It is rare for all of these techniques to be used in a project. However, it is important that at least one type of questionnaire testing should always be carried out.

Source: Brace, Ian (2018), Questionnaire Design: How to Plan, Structure and Write Survey Material for Effective Market Research, 4th edition, Kogan Page.
