Dealing with social desirability bias (SDB) in questionnaire surveys

When writing the questionnaire, care must be taken to identify question areas that are potential sources of SDB. If the questions ask about attitudes or behaviour on any subject that has a social responsibility component, then consideration should be given to how best to minimize any possible bias. Simply asking respondents to be honest has very little effect (Phillips and Clancy, 1972; Brown, Copeland and Millward, 1973).

Research carried out under the MRS, ESOMAR or CASRO codes of conduct should in any case tell respondents that their responses will be treated confidentially. This can be reinforced with a restatement of confidentiality as part of the introduction to the sensitive questions. However, the effect of this appears to be slight (Singer, Von Thurn and Miller, 1995; Dillman et al, 1996), or even to reduce the level of cooperation (Singer, Hippler and Schwarz, 1992). This reduction in cooperation may arise because the additional emphasis on confidentiality signals to respondents that the questions are particularly sensitive, and so increases their nervousness about answering them. And, except in self-completion surveys, there is still the interviewer, who will be aware of the responses. Appeals for honesty and assurances of confidentiality are therefore insufficient on their own; more positive measures are required.

1. Removing the interviewer

With face management, respondents are trying to create an impression that they are more socially responsible than they actually are. They may be trying to create that impression for the interviewer or for the unseen researcher. Many respondents will not appreciate that their responses are likely to be seen at an identifiable level only by the interviewer and, if a paper questionnaire is used, by the person entering or editing the data. That may not matter in the sense that they just want to be ‘known’ as responsible people. However, the most obvious person for whom they want to create a good impression is the interviewer. Using a self-completion questionnaire, by removing the interviewer from the interface, should therefore eliminate much, but probably not all, of this particular problem. However, it will not eliminate ego defence/self-deception or instrumentation. Earlier work on this topic (Lautenschlager and Flaherty, 1990; Booth-Kewley, Edwards and Rosenfeld, 1992) was inconclusive as to whether removing the interviewer reduces SDB. More recently, Poynter and Comley (2003), Duffy et al (2005) and Bronner and Kuijlen (2007) have all demonstrated that the admission of socially undesirable behaviour is greater with online surveys than with interviewer-administered surveys, demonstrating the greater honesty achieved with this medium. In addition, Kellner (2004) demonstrated that there was less pressure on respondents to appear knowledgeable.

Self-completion questionnaires are also good to use where the subject is potentially embarrassing for the respondent, and they eliminate much of the bias that would otherwise occur. Both mail surveys and internet-based surveys benefit in this respect, with internet-based surveys possibly being seen by respondents as the most anonymous form of interview.

2. Randomized response technique

The randomized response technique was first developed by Warner (1965). It provides a mechanism for respondents to be truthful about embarrassing or even illegal acts without anyone being able to identify that they have admitted to such an act.

This is achieved because the respondent is presented with two alternative questions, one of which is sensitive and the other not sensitive. No one other than the respondent knows which question has been answered.

To achieve this, two questions with the same set of response codes are presented for self-completion. One is the sensitive or threatening question; the other is non-threatening and innocuous. Respondents are allocated to answer one of these questions in a random way, the outcome of which is unknown to the interviewer. This can be done by having balls of two different colours in a bag and asking the respondent to draw one out without showing it to the interviewer, or by tossing a coin out of sight of the interviewer. However, this can be a cumbersome process in most interview situations.

An alternative method, which would also work in online self-completion interviews, is presented in Figure 12.1. We know from other sources that 17 per cent of the population have their birthday in November or December and, given a sufficiently large sample, we can reasonably apply this proportion.

So, of a sample of 1,000, it can be assumed that 830 will have answered the threatening question and 170 the non-threatening question. Of the 170, half (85) will have answered ‘Yes’ to the question about their telephone number.

If X respondents out of the total sample have answered ‘Yes’, we can deduce that X – 85 of those who answered the threatening question answered ‘Yes’ to it. We can therefore arrive at an estimate of the proportion of the population who have used marijuana in the last 12 months, which is (X – 85)/830.
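To make the arithmetic concrete, here is a minimal sketch in Python. It assumes, as in the example above, that 17 per cent of respondents are routed to the innocuous question and that half of those answer ‘Yes’ to it; the function name and the illustrative total of 251 ‘Yes’ answers are hypothetical.

```python
def randomized_response_estimate(total_yes, n,
                                 p_innocuous=0.17,
                                 innocuous_yes_rate=0.5):
    """Estimate the proportion answering 'Yes' to the threatening question."""
    expected_innocuous_yes = n * p_innocuous * innocuous_yes_rate  # eg 85 of 1,000
    n_threatening = n * (1 - p_innocuous)                          # eg 830
    return (total_yes - expected_innocuous_yes) / n_threatening

print(randomized_response_estimate(251, 1000))  # (251 - 85) / 830 = 0.2
```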

It is a risky assumption that respondents are honest, both about which question they choose to answer and about the way in which they answer the threatening question. If people wish to avoid answering the threatening question, they only have to pretend to themselves that their birthday falls when it does not, and there is nothing to stop them simply ignoring the instruction and answering the non-threatening question. Some people may not be convinced that the researcher will not be able to determine which question they have answered and so lie about their behaviour anyway. Whether respondents have either understood or followed the instructions cannot be directly checked. Some may also judge the question to be pointless as they cannot understand how it works. They may then not answer the question or, if they do, not follow the instructions.

It has been shown (Sudman and Bradburn, 1982) that the technique works effectively for subjects that are relatively unthreatening, eg having been involved in a case in a bankruptcy court, but that with more threatening subjects, eg drunken driving, it still significantly underestimates levels of behaviour.
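A rough simulation can illustrate this underestimation. The sketch below uses purely illustrative assumptions (a true incidence of 20 per cent, with three in ten of those who have done the behaviour denying it); none of the figures comes from the study cited.

```python
import random

def simulate(n=100_000, true_rate=0.20, p_innocuous=0.17, lie_rate=0.3):
    """Monte Carlo: some respondents deny the behaviour despite the protection."""
    yes = 0
    for _ in range(n):
        if random.random() < p_innocuous:
            yes += random.random() < 0.5        # innocuous question: 50% 'Yes'
        else:
            guilty = random.random() < true_rate
            if guilty and random.random() < lie_rate:
                guilty = False                  # respondent lies and says 'No'
            yes += guilty
    # The analyst applies the standard estimator, unaware of the lying
    return (yes - n * p_innocuous * 0.5) / (n * (1 - p_innocuous))

print(simulate(lie_rate=0.0))  # close to the true 0.20
print(simulate(lie_rate=0.3))  # around 0.14: a significant underestimate
```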

This approach is limited to providing an estimate of the proportions answering ‘Yes’ and ‘No’ to the threatening question among the total sample, or among sub-groups that are of sufficiently large sample size for the assumptions regarding the proportions answering the non-threatening question still to hold. As it is not possible to distinguish individual respondents who answered the threatening question, it is not possible to cross-analyse them against any other variables from the survey in order to establish, say, the profile of those who admit to the behaviour and that of those who do not.

What the technique achieves is to provide an opportunity for the respondent to answer honestly. This means that, while it addresses ‘impression management’, it can do nothing about ‘self-deception’.

This technique would therefore appear to be a useful, if limited, tool provided that the subject is not too threatening. The difficulty is in determining when a topic is too threatening for this approach to be successful.

3. Face-saving questions

Face-saving questions give respondents an acceptable way of admitting to socially undesirable behaviour, by including in the question a reason why they might behave in that way. For example, if the questionnaire writer wishes to measure how many people have read the new edition of the Highway Code, instead of asking ‘Have you read the latest edition of the Highway Code?’ the writer could ask ‘Have you had time yet to read the latest edition of the Highway Code?’

The first question can sound confrontational, with an implication that respondents ought to have read the latest edition and be aware of current driving rules. This can force respondents on to the defensive, or to feel guilty about not having read it, and hence to lie and say that they have read it. The second question carries an assumption that respondents know that they ought to read it and will when they have the time. This is less confrontational, eases any guilt about not having read it and makes it easier for respondents to admit that they have not.

Work carried out in the USA (Holtgraves, Eck and Lasky, 1997) has consistently demonstrated over a series of studies that questions of this type can significantly reduce over-claiming of socially desirable knowledge (eg of global warming, health care legislation, trade agreements and current affairs) and reduce under-claiming of socially undesirable behaviour (eg cheating, shoplifting, vandalism, littering). However, the work is inconclusive regarding the impact of such questions when applied to socially desirable behaviour (eg recycling, studying, attending concerts). Questionnaire writers can therefore use this technique confident that it reduces SDB where knowledge is being asked about, or where the task is to get respondents to admit to undesirable behaviour. However, caution should be applied before using it to reduce over-claiming of desirable behaviour.

Care must also be taken with face-saving questions so as not to create a truly double-barrelled question. The question ‘Do you read a newspaper on a daily basis?’ might be expected to lead to over-claiming of a socially desirable behaviour. It would then be replaced with the question ‘Do you have the time to read a newspaper on a daily basis?’ This, however, now contains two clear elements – reading the newspaper and having the time. Some respondents may answer positively on the grounds that, although they do not read a newspaper daily, they do have the time to do so. Other respondents might give a negative answer because, although they do read a newspaper each day, they do not feel that they have enough time.

Another technique that reduces threat in questions of knowledge is to use the phrase ‘Do you happen to know…’ at the beginning of the question. Rather than ask ‘How many kilometres are there in a mile?’ or ‘Do you know how many kilometres there are in a mile?’ the question should be ‘Do you happen to know how many kilometres there are in a mile?’ This softens the question, making it less confrontational, and has been shown to lead to an increase in the level of ‘Don’t know’ responses, suggesting that respondents find it easier to admit their ignorance rather than guess.

4. Indirect questioning

A technique sometimes used in qualitative research is not to ask respondents what they think about a subject, but to ask them what they believe other people think. This allows them to put forward views that they would not admit to holding themselves, which can then be discussed. It is sometimes possible to use a similar technique in a quantitative research questionnaire. However, in qualitative research the group moderator or interviewer can discuss these views and use his or her own judgement as to whether respondents hold these views themselves or simply believe that other people hold them.

In quantitative research both the structured nature of the interview and the separation of respondents and researcher make this far more difficult to achieve. The researcher is therefore left with uncertainty as to the proportion of respondents who projected their own feelings and the proportion who honestly reported their judgement of others.

5. Question enhancements

The questionnaire writer can take a number of other simple steps in order to help minimize SDB.

5.1. Reassure that behaviour is not unusual

Where there is a concern that people may misreport their behaviour, statements that certain types of behaviour are not unusual can be built into the question, to reassure respondents that whatever option they choose, their behaviour will be considered by the interviewer or by the researcher to be normal. For example, ‘Some people read a newspaper every day of the week, others read a newspaper some days a week, while others never read a newspaper at all. To which of these categories do you belong?’

5.2. Extended responses on prompts

In a similar way, extended responses on prompt material can suggest that extreme behaviour is not unusual and so encourage honest responses. For example, when asking about the amount of alcohol that people drink, the researcher can use prompts with categories that go well beyond normal behaviour, so that the categories for mildly heavy drinkers appear mid-way on the list. This helps heavier drinkers to feel that their consumption may be of a more normal level than it actually is, and they may then be more likely to be honest and not under-report. Care needs to be taken not to make light drinkers feel inadequate and so feel forced to over-report their weight of drinking. Relatively small gradations at the lighter end of the scale, which show lighter drinkers that they have more options, can help with this (see Figure 12.2).

An alternative approach is to have broad categories, probably no more than three in total, so that respondents do not have to identify the amount too closely.

The second approach is likely to be preferred by respondents because they do not have to specify the amount closely, which they may be reluctant to do either because they do not want to admit to it or because they find it difficult to calculate. However, for most research purposes the broad categories supply insufficient data to the researcher for the required analyses.

This approach can, however, be used as the first part of a two-part question. The first question identifies which of the three broad categories the respondent falls into, and a second question identifies the amount more precisely within that category.
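As a sketch of how such a two-part question might be scripted (the bands and code letters below are invented for illustration; they are not those shown in Figure 12.2):

```python
# Hypothetical two-part consumption question: a broad band first, then a
# finer band within it. Note the smaller steps at the light end of the scale.
BROAD_BANDS = {
    "A": "None",
    "B": "1-14 units a week",
    "C": "15 or more units a week",
}
FINE_BANDS = {
    "B": ["1-2", "3-5", "6-9", "10-14"],             # small steps at the light end
    "C": ["15-21", "22-35", "36-50", "51 or more"],  # broader steps further up
}

def second_question(broad_code):
    """Return the finer bands to prompt with, if any, for a broad answer."""
    return FINE_BANDS.get(broad_code, [])

print(second_question("B"))  # ['1-2', '3-5', '6-9', '10-14']
```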

5.3. Identifying responses by codes

So that respondents do not have to articulate the response to the interviewer, code letters can be used against each of the prompted response categories and the respondent asked to read out the appropriate code letter. Respondents therefore do not have to read the answer aloud, which helps them to feel that a degree of confidentiality is being maintained. The interviewer of course knows to which response category each code applies, but respondent and interviewer do not have to share the information overtly (see Figure 12.3).
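A minimal sketch of the idea, with hypothetical letters and income bands (the actual example is in Figure 12.3):

```python
# Hypothetical show card: the respondent reads out only the letter, so the
# answer itself is never spoken aloud. Letters and bands are invented.
SHOW_CARD = {
    "K": "Under £20,000",
    "T": "£20,000-£39,999",
    "B": "£40,000-£59,999",
    "M": "£60,000 or more",
}

def decode(letter):
    """Interviewer's script translates the spoken letter back to the band."""
    return SHOW_CARD.get(letter.strip().upper())

print(decode("t"))  # £20,000-£39,999
```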

6. Bogus pipeline

One other approach should be mentioned, though it has little application in normal market research surveys: the bogus pipeline.

Respondents are physically connected to an apparatus that they are told can detect their true feelings and emotions. There is therefore, they believe, no point in giving anything other than wholly truthful responses to the questions asked. This is, of course, not true: the apparatus is bogus. The approach has been used and has been shown to reduce social desirability bias. There is concern, though, that although the technique does affect responses, this may be because respondents answer more carefully and with more thought rather than because they are trying to be more truthful.

However, because of the ethical issues it poses in deceiving members of the public about the capabilities of the apparatus, and because of both the difficulty and the cost of applying it, this is generally not an appropriate technique to use in market research surveys.

Source: Brace, Ian (2018), Questionnaire Design: How to Plan, Structure and Write Survey Material for Effective Market Research, 4th edn, Kogan Page.
