1. Pre-coded open questions
Frequently with interviewer-administered surveys, a list of pre-codes is provided with open questions for the interviewer’s use. This may simply be a brand list on which to code the response to a question such as ‘Which brand of breakfast cereal did you eat today?’ or it may be used to categorize more complex responses (see Figure 4.2).
This requires the questionnaire writer to second-guess what the range of responses is going to be. It is usually done to save time and the cost of coding open-ended verbatim responses. This approach might also be used to try to provide some consistency of response by forcing the open responses into a limited number of options. It is important that there is always a space provided for the respondent or interviewer to write in answers that are not covered by the pre-codes. It is unlikely that the questionnaire writer will have thought of every possible response that will be given, and it is not unusual for quite large proportions of the responses to be written in as ‘other answers’. However, there is still a danger that respondents or interviewers will try to force responses into one of the codes given rather than write in a response that is close to, but does not quite fit, one of the pre-codes.
The richness and illustrative power of the verbatim answer is lost by providing pre-codes, as are any subtle distinctions between responses, but the processing time and cost will be reduced. Consistency with other surveys may also be increased.
The code list may be based on qualitative research that has suggested the range of answers that could be expected or on the results of previous studies.
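The mechanics described above can be sketched in a few lines. This is a minimal illustration, not from the book: the brand names, code numbers and function name are all hypothetical. The point it shows is the one made in the text: an unmatched answer is kept verbatim under an ‘Other’ code rather than forced into the nearest pre-code.

```python
# Hypothetical pre-code list for an open brand question. Codes and brands
# are invented for illustration; a real study would draw them from
# qualitative research or previous surveys, as the text notes.
PRE_CODES = {
    "kellogg's corn flakes": 1,
    "weetabix": 2,
    "shredded wheat": 3,
}
OTHER_CODE = 99  # 'Other (write in)' — always provided

def code_response(verbatim):
    """Return (code, write_in). Exact matches take their pre-code; any
    other answer gets the 'Other' code with the verbatim preserved for
    later coding, rather than being force-fitted to a near-miss code."""
    key = verbatim.strip().lower()
    if key in PRE_CODES:
        return PRE_CODES[key], None
    return OTHER_CODE, verbatim.strip()
```

Keeping the write-in text alongside the ‘Other’ code is what allows a large ‘other answers’ proportion to be re-coded later, at the cost of losing some of the richness that a fully verbatim record would retain.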
2. Pre-coded closed questions
Closed questions will tend to be pre-coded. Either a prompt list of possible answers is used or there is a known and finite number of responses that can be given. These are provided on a code list for the interviewer or the respondent to select. There is little point in not providing such a list and requiring the answers to be written in, with the consequent cost and time of having to code the responses.
3. Dichotomous questions
The simplest of closed questions are dichotomous questions, which have only two possible answers:
‘Have you drunk any beer in the last 24 hours?’
It is possible that respondents could refuse to answer or say that they ‘Don’t know’.
Dichotomous questions such as this are easy to write and easy to ask. Complex pieces of information can often be broken down into a series of dichotomous questions that respondents can be led through, with a greater expectation of accuracy than would be achieved with a single question.
‘Have you bought a bicycle in the last 12 months as a present for a child in your family that cost over £200?’
is more easily asked, and understood, as:
‘Have you bought a bicycle in the last 12 months?’
‘Was it for your own use or for someone else’s?’
IF SOMEONE ELSE’S:
‘Was that other person a child?’
IF A CHILD:
‘Is that child a member of your family?’
IF MEMBER OF THE FAMILY:
‘Did it cost £200 or more, or less than £200?’
As can be seen, additional information is also picked up along the way. When the questioning is through a single question, we can only determine the penetration of the defined group. By breaking the questions down we can also determine the penetration of bicycle purchasers and whether the purchase was for the buyer’s own use or as a gift. This information may be checked against other sources to establish the accuracy of the sample, or it may be new information, not previously available.
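The routing through this chain of dichotomous questions can be sketched as follows. This is a hypothetical illustration (the function name, question identifiers and answer format are all invented); it shows how each yes/no branch either stops the routing or leads to the next question, and how the facts established along the way are retained even when routing stops early.

```python
def route_bicycle_questions(answers):
    """Walk the dichotomous question chain from the text. `answers` maps
    question ids to 'yes'/'no'. Returns the facts established before
    routing stopped; every answered question contributes information."""
    facts = {}
    facts["bought_bicycle"] = answers["bought_bicycle"] == "yes"
    if not facts["bought_bicycle"]:
        return facts  # no purchase in the last 12 months: routing ends
    facts["for_someone_else"] = answers["for_someone_else"] == "yes"
    if not facts["for_someone_else"]:
        return facts  # bought for own use: still useful penetration data
    facts["for_a_child"] = answers["for_a_child"] == "yes"
    if not facts["for_a_child"]:
        return facts
    facts["child_in_family"] = answers["child_in_family"] == "yes"
    if not facts["child_in_family"]:
        return facts
    facts["cost_200_or_more"] = answers["cost_200_or_more"] == "yes"
    return facts
```

A respondent who answers ‘no’ at the second question still tells us they bought a bicycle for their own use, which the single compound question could never reveal.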
However, care must be taken that the question really is dichotomous. Consider the question ‘Will you buy a new bicycle in the next six months?’ This may appear to be dichotomous, capable of being answered ‘yes’ or ‘no’. But if they were the only answers offered it would result in a high proportion of ‘Don’t know’ answers because future behaviour is unpredictable. Some respondents will be certain that they will not buy a bicycle in the next six months; others will be certain that they will. Others, though, will not be sure. They may think that there is a possibility that they will, but have not been given this option as an answer.
The real question here is about current expectations or intentions. It could therefore be asked as: ‘At the moment, do you intend (or expect) to buy a new bicycle in the next six months?’ This could now be treated as a dichotomous question, but is still probably better asked as a scale, from ‘Definitely will buy’ to ‘Definitely will not’, encompassing less certain positions along the way. This would allow respondents to express better their true uncertainty regarding their future behaviour (see Chapter 5).
4. Multiple choice
Closed questions with more than one possible answer are known as multiple choice (or multichotomous) questions. Such a question might be: ‘Which brand or brands of beer on this list have you drunk in the last seven days?’ Clearly, there is a finite number of answers; the range of possible answers is predictable; and the question does not require respondents to say anything ‘in their own words’. By defining the brands of interest the questionnaire writer has made this a closed question.
5. ‘Don’t know’ responses
Questionnaire writers are often unsure as to whether they should include a ‘Don’t know’ response to pre-coded questions. With interviewer-administered questionnaires, it is argued, the inclusion of ‘Don’t know’ legitimizes it as a response. If it is not on the questionnaire, the interviewer will be more likely to probe for a response that is on the pre-code list before having to write in that the respondent is unable or unwilling to answer the question.
‘Don’t know’ can be a legitimate response to many questions where the respondent genuinely does not know the answer, and there should be no difficulty in identifying questions where a ‘Don’t know’ code must be included:
- ‘Which mobile phone service does your partner subscribe to?’
- ‘When was your house last repainted?’
- ‘From which store was the jar of coffee bought?’
With other questions, though, it is not always so clear. These tend to be questions either of opinion, where a likelihood of action is sought, or of recent behaviour, which the respondent could be expected to remember:
- ‘Where in the house would you be most likely to use this air freshener?’
- ‘What method of transport did you use to get here today?’
- ‘Which brand of tomato soup did you buy most recently?’
A good reason for having a ‘Don’t know’ code on interviewer-administered paper questionnaires is that without it the response may be left blank. The researcher cannot then be sure that the question was asked. Knowing that the respondent could not or would not answer the question gives a positive assurance to the researcher that the interview was administered correctly.
This can also provide important information about the knowledge of respondents and their ability to answer this question. Isolated responses of this type might indicate that those respondents were not recruited correctly to the desired criteria. Widespread responses of this type might indicate that the information asked is beyond the scope of this research universe (eg asking post room managers in businesses about the size of the company’s stationery bill) or that the question is poorly worded and not understood by many of the respondents. This is generally information worth knowing and should encourage the inclusion of ‘Don’t know’ codes on the questionnaire.
Bias can be introduced under certain circumstances if there is no ‘Don’t know’ code. For example, if a brand name is asked for it is more likely that the brand leader (or best-known brand if that is different) will be the one that comes to mind first, or will be the one that respondents guess that they are most likely to have bought recently. Less-well-known brands may get under-represented, so a bias has been introduced through the lack of a ‘Don’t know’ code.
With CAPI and CATI questionnaires it is usual to provide a ‘Don’t know’ code for most questions, as, without a way to record that response, it may not be possible to move on to the next question.
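The behaviour just described can be sketched in a few lines. This is a hypothetical illustration, not any real CAPI/CATI system’s API: a scripted interview refuses to advance until one of the listed codes is recorded, so ‘Don’t know’ must itself be a listed code if such respondents are to get past the question at all.

```python
DONT_KNOW = 98  # hypothetical code number, for illustration only

def record_answer(valid_codes, entered):
    """Accept only a code on the question's list; refuse anything else so
    the interview cannot move on with a blank. Including DONT_KNOW in
    valid_codes is what lets a genuine 'Don't know' be recorded."""
    if entered not in valid_codes:
        raise ValueError("a listed response is required before the next question")
    return entered
```

Under this scheme a blank is impossible by construction, which is the electronic counterpart of the paper-questionnaire argument above: a recorded ‘Don’t know’ positively assures the researcher that the question was asked.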
With self-completion questionnaires, the provision of a ‘Don’t know’ code has to be considered question by question. Such a code on every question may indeed encourage respondents not to think sufficiently about their response, and if there is any uncertainty, to answer ‘Don’t know’. It is prudent, therefore, to limit the use of ‘Don’t know’ categories to those questions where the researcher believes it to be a genuine response. With web-based self-completion questionnaires there are other issues regarding not encouraging respondents to give ‘Don’t know’ as an answer, while enabling them to continue to the next question. These issues are considered as a matter of questionnaire layout in Chapter 8.
Source: Brace, Ian (2018), Questionnaire Design: How to Plan, Structure and Write Survey Material for Effective Market Research, 4th edition, Kogan Page.