An open-ended question is one where the response is recorded verbatim. An open-ended question is nearly always also an open question (it would be wasteful to record yes-no answers verbatim). Open-ended questions are also known as ‘unstructured’ or ‘free-response’ questions.
Open-ended questions are used for a number of reasons:
- The researcher cannot predict what the responses might be, or it would be risky to try. Questions about what is liked or disliked about a product or service should always be open-ended, as it would be presumptuous to assume what people might like or dislike by offering a list of pre-codes.
- We wish to know the precise phraseology that people use to respond to the question. We may be able to predict the general sense of the response but wish to know the terminology that people use.
- We may wish to quote some verbatim responses in the report or presentation to illustrate something such as the strength of feeling among respondents. In response to the question ‘Why will you not use that company again?’, a respondent may write in: ‘They were — awful. They mucked me about for months, didn’t respond to my letters and when they did they could never get anything right. I shall never use them again.’ Had pre-codes been given on the questionnaire, this might simply have been recorded as ‘Poor service’. The verbatim response provides much richer information to the end user of the research.
- Through analysis of the verbatim responses, clients can determine if the customer is talking about a business process, a policy issue, a people issue (especially in service delivery surveys), etc. This enables them to determine the extent of any challenges they will face when reporting the findings of the survey to their management.
Common uses for open-ended questions include:
- likes and dislikes of a product, concept, advertisement, etc;
- spontaneous descriptions of product images;
- spontaneous descriptions of the content of advertisements;
- reasons for choice of product/store/service provider;
- why certain actions were taken or not taken;
- what improvements or changes respondents would like to see.
These are all directive questions, aimed at eliciting a specific type of response to a defined issue. In addition, non-directive questions can be asked, such as what, if anything, comes to mind when the respondent is shown a visual prompt, and whether there is anything else that the respondent wants to say on the subject. Questions that ask ‘What?’ or ‘Why?’ or ‘How?’, or for likes and dislikes, will commonly be open-ended.
Open-ended questions are easy to ask but suffer from several drawbacks:
- In interviewer-administered surveys they are subject to error in how, and in how much detail, the interviewer records the answer.
- Respondents frequently find it difficult both to recognize and to articulate how they feel. This is particularly true of negative feelings, so that asking open-ended questions about what people dislike about something tends to generate a high level of ‘Nothing’ or ‘Don’t know’ responses.
- Without the clues given by an answer list, respondents sometimes misunderstand the question or answer the question that they want to answer rather than the one on the questionnaire.
- Analysing the responses can be a difficult, time-consuming and relatively expensive process.
In addition, some commentators (Peterson, 2000) see respondent verbosity as a problem with open-ended questions. It is argued that if one respondent says only one thing that he or she likes about a product, but another says six things, then the latter respondent will be given six times the weight of the former in the analysis. To even this up, it is suggested that only the first response of the more verbose respondent be counted. In practice, however, interviewers are trained to extract as much detail as possible from respondents at open-ended questions. The objective is to identify the full range of responses given by all respondents and to determine the proportion of the sample that agrees with each of them.
To analyse the responses, a procedure known as ‘coding’ is used. Manual coding requires a sample of the answers to be examined and the answers grouped under commonly occurring themes, usually known as a ‘code frame’. If the coder is someone other than the researcher, then that list of themes needs to be discussed with the researcher to see whether it meets the researcher’s needs. The coder may have grouped answers relating to low price and to value for money together as a single theme, but the researcher may see them as distinct issues and want them separated. The researcher may be looking for specific responses to occur that have not arisen in the sample of answers listed. It may be important for the researcher to know that few people mention this, but in order to be sure that this is the case, the theme must be included on the code frame. When the list of themes has been agreed, each theme is allocated a code, and all questionnaires are then inspected and coded according to the themes within each respondent’s answer.
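The mechanics of applying an agreed code frame can be sketched in a few lines of Python. Everything here is a hypothetical illustration: the themes, keyword lists and answers are invented, and real coding relies on human judgement rather than simple keyword matching. Note that, as discussed above, ‘Low price’ and ‘Value for money’ are kept as separate themes.

```python
# A minimal sketch of applying an agreed code frame to verbatim answers.
# The themes, keywords and responses below are hypothetical examples.
from collections import Counter

CODE_FRAME = {
    1: ("Low price", ["cheap", "price", "inexpensive"]),
    2: ("Value for money", ["value", "worth"]),
    3: ("Good service", ["service", "helpful", "friendly"]),
    98: ("Other", []),
}

def code_answer(answer: str) -> list[int]:
    """Return every code whose keywords appear in the verbatim answer."""
    text = answer.lower()
    codes = [code for code, (_, keywords) in CODE_FRAME.items()
             if any(kw in text for kw in keywords)]
    return codes or [98]  # anything unmatched falls into 'Other'

answers = [
    "Very cheap and the staff were friendly",
    "Good value for money",
    "It was close to home",
]

# Report the proportion of the sample mentioning each theme.
tally = Counter(code for a in answers for code in code_answer(a))
for code, count in tally.most_common():
    theme = CODE_FRAME[code][0]
    print(f"{theme}: {count}/{len(answers)} respondents "
          f"({100 * count / len(answers):.0f}%)")
```

Because one respondent can be assigned several codes, the percentages can sum to more than 100; multi-coded questions are therefore reported as the proportion of respondents mentioning each theme.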
Manual coding is a slow and labour-intensive activity, particularly when the sample is large and the questionnaire contains many open-ended questions. Most research agencies will include a limit on the number of open-ended questions in their quote for a project, because it is such a significant variable in the costing.
There are a number of computerized coding systems available, which are increasingly used by research companies. These reduce but do not eliminate the human input required, and so make some cost savings.
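To illustrate why the human input is reduced rather than eliminated, the toy sketch below (reusing the hypothetical code_answer helper and answers list from the previous example) auto-codes what it can and queues anything unmatched for a human coder.

```python
# Sketch of a semi-automatic coding pass: auto-code what we can,
# queue the rest for manual review by a human coder.
auto_coded, needs_review = {}, []
for respondent_id, answer in enumerate(answers):
    codes = code_answer(answer)
    if codes == [98]:  # no keyword matched: a human must decide
        needs_review.append((respondent_id, answer))
    else:
        auto_coded[respondent_id] = codes

print(f"Auto-coded: {len(auto_coded)}, "
      f"sent for manual review: {len(needs_review)}")
```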
1. Probing
With most open questions it is important to extract from respondents as much information as they can provide. The first reason a respondent gives for having bought a brand may apply equally to all brands and so will not discriminate between them. Although it is the first that comes to mind, it may not be the one in which the researcher is most interested. First responses given to open questions are often very bland, and non-directional probing is required to try to fill out the answer.
Probing is very different from prompting, and the two must not be confused. In prompting, respondents are given a number of possible answers from which to choose, or are given clues to the answers through visual or picture prompts. Probing makes no suggestions regarding answers to the respondent. A typical probe with instructions is:
‘What else did you like about the product?’ PAUSE. THEN PROBE:
‘What else?’ CONTINUE UNTIL NO FURTHER ANSWERS.
The object here is to keep respondents talking in reply to the initial question in their own words until there is no more that they can or wish to say. They are not led in any direction.
Do not use phrases such as ‘Is there anything else?’ as a probe. That form of probe allows or even encourages the respondents to say ‘No, nothing else.’ If the probe is ‘What else?’, this makes a presumption that there is more that the respondent wants to say and puts the onus on the respondent to indicate that he or she has no more to say. This helps the researcher to obtain the fullest answer rather than helping the respondent to say as little as possible.
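In a web or CAPI script, the probing routine described above might be implemented along the following lines. This is a minimal sketch assuming a simple console interface; real interviewing software has its own scripting languages, and the question and probe wordings are only examples.

```python
def ask_with_probes(question: str, probe: str = "What else?") -> list[str]:
    """Record verbatim answers, probing until the respondent has nothing to add."""
    responses = []
    answer = input(f"{question}\n> ").strip()
    while answer:  # an empty reply signals 'no further answers'
        responses.append(answer)
        answer = input(f"{probe}\n> ").strip()
    return responses

# Example usage:
# likes = ask_with_probes("What did you like about the product?")
```

Note that the probe defaults to ‘What else?’ rather than ‘Is there anything else?’, so the onus remains on the respondent to signal that he or she has no more to say.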
It is occasionally possible to anticipate unhelpful answers and ask for these specific responses to be elaborated. A common example is when respondents give ‘convenience’ as an answer to why they use a particular shop or travel by a particular type of transport. This is a common answer given to this type of question, but is frustratingly unhelpful. Where it is anticipated that this will occur, an instruction may be given to interviewers to probe for more information regarding in what way it was convenient, and what ‘convenience’ means to the respondent.
Source: Brace, Ian (2018), Questionnaire Design: How to Plan, Structure and Write Survey Material for Effective Market Research, 4th edition, Kogan Page.