Response Bias: Types and Prevention Methods

What is Response Bias?

Response bias (also referred to as survey bias) is a documented phenomenon in survey science: the tendency of respondents to provide dishonest or misleading answers when responding to certain stimuli. The causes can be conscious or unconscious, and are often a result of:

  • Response Bias caused by Question and Answer Design
  • Respondent Fatigue
  • Question/ Answer Order
  • Poor Format, Structure and Wording

This bias poses a risk to the integrity of your results, as the answers provided won’t be accurate. Unfortunately, it’s not always possible to prevent or detect bias, but this article will help you identify it where possible.

Satisficing

Satisficing is an amalgamation of the terms ‘satisfy’ and ‘suffice’, and is defined as ‘what is sufficient to obtain a satisfactory outcome’. In other words, it describes disengaged respondents who only intend to fulfil the minimum requirements of a project: submitting a response, regardless of its accuracy. These respondents may leave questions unanswered, or even select answer choices that do not represent their opinions or experiences, which in turn skews your results.

Below, you’ll find a list of other types of response bias that display the attributes of satisficing:

Speed Runs

Some respondents speed through your surveys/ questionnaires without paying any attention to your questions or their answers. This may occur because you’re offering an incentive to respondents, and they just want a chance at the reward. Alternatively, it could be a consequence of survey fatigue, where participants have become mentally taxed and rush through questions simply to finish.

Tip: You will often find that the resulting answer selections from this type of bias will not logically correlate with each other when compared. This can be a time-consuming method of detecting bias, but may be essential to protecting your data set.
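If your survey tool records completion times, a simple timing check can complement the manual comparison described above. The sketch below is illustrative only (the function name `flag_speed_runs` and the 0.4 cutoff are assumptions, not part of any survey platform’s API): it flags respondents who finished far faster than the typical respondent.

```python
from statistics import median

def flag_speed_runs(durations, min_ratio=0.4):
    """Flag respondents whose completion time is implausibly short.

    durations: dict mapping respondent id -> total seconds taken.
    min_ratio: fraction of the median duration below which a
               response is treated as a likely speed run.
    """
    cutoff = median(durations.values()) * min_ratio
    return [rid for rid, secs in durations.items() if secs < cutoff]

# Example: respondent "r4" finished far faster than everyone else.
times = {"r1": 310, "r2": 290, "r3": 350, "r4": 45, "r5": 305}
print(flag_speed_runs(times))  # → ['r4']
```

Flagged responses still merit a manual review before exclusion, since some respondents are genuinely fast.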

Straight lining and Patterning

Respondents who aren’t engaging with your survey will often form a pattern in your data set:

Straight Lining: Respondents select the same numbered answer for each question (usually Opinion/ Likert Scale question types), e.g. they choose answer option 2 for every question. This will manifest as a straight line in your results.

Patterning: Participants alternate between a few answer options, creating an identifiable pattern in your results.

Note: More sophisticated respondents, who are actively providing dishonest answers, may avoid creating a pattern altogether. These cases are more difficult to detect, but are also less likely to occur.
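Both behaviours are straightforward to screen for programmatically. This minimal sketch assumes answers are recorded as numeric codes (e.g. Likert positions); the helper names are illustrative, not from any library.

```python
def is_straight_lined(answers):
    """True if every answer is identical (e.g. all 2s on a Likert scale)."""
    return len(set(answers)) == 1

def has_pattern(answers, max_cycle=3):
    """True if the answers repeat in a short cycle, e.g. 1,3,1,3,1,3."""
    n = len(answers)
    for cycle in range(2, max_cycle + 1):
        # Require at least two full repetitions of the candidate cycle.
        if n >= cycle * 2 and all(answers[i] == answers[i % cycle] for i in range(n)):
            return True
    return False

print(is_straight_lined([2, 2, 2, 2, 2]))  # → True
print(has_pattern([1, 3, 1, 3, 1, 3]))     # → True
print(has_pattern([2, 4, 1, 5, 3, 2]))     # → False
```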

Non-Differentiation

Where scaled questions are concerned (Strongly Disagree – Strongly Agree), there’s a risk of respondents failing to differentiate between answer choices. In these cases, they may give identical, or similar, responses to all questions using the same scale.

E.g. participants might assign a rating of 3 to all scales, regardless of whether it reflects their opinions or experiences.

Note: Non-differentiation is also known as ‘straight lining’, as the answer choices will often appear as straight lines, or other patterns, in your results.

Tip 1: It is easier to detect non-differentiation by flipping the order of positive and negative scale extremes every so often.

E.g.

Scale 1:

Strongly Disagree – Strongly Agree

Scale 2:

Strongly Agree – Strongly Disagree

Scale 3:

Strongly Disagree – Strongly Agree
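Tip 1 can also be applied programmatically: reverse-code the flipped scales and check whether a respondent’s constant raw answers become contradictory once coded. A minimal sketch, assuming a 5-point scale (the function names are hypothetical):

```python
def reverse_code(answer, scale_max=5):
    """Map an answer on a reversed scale back to the standard direction."""
    return scale_max + 1 - answer

def looks_non_differentiated(answers, reversed_items, scale_max=5):
    """Flag a respondent who gives the same raw answer everywhere,
    even on items whose scale direction was flipped.

    answers: list of raw answers, one per item.
    reversed_items: set of item indices presented with the flipped scale.
    """
    if len(set(answers)) != 1:
        return False  # raw answers vary, so the respondent differentiated
    # The same raw answer on normal and flipped items means the *coded*
    # opinions contradict each other -- a sign of inattention.
    coded = [reverse_code(a, scale_max) if i in reversed_items else a
             for i, a in enumerate(answers)]
    return len(set(coded)) > 1

# Item 1 uses the flipped scale; raw 5s everywhere are contradictory.
print(looks_non_differentiated([5, 5, 5], reversed_items={1}))  # → True
print(looks_non_differentiated([5, 1, 5], reversed_items={1}))  # → False
```

Note that a respondent who always picks the exact midpoint won’t be caught this way, since the midpoint maps to itself under reversal.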

Tip 2: To stem the occurrence of satisficing in your respondents, avoid employing long sets of repetitive scales (or any other question type), as this can be mentally taxing and lead to respondent disengagement.

Acquiescence Response Bias

This bias entails the selection of answer options with positive connotations, and is also known as ‘yea-saying’. There are a couple of reasons why respondents exhibit this type of bias:

  1. Respondents display the tendency to agree with a statement when in doubt over which answer option to choose.
  2. Respondents are adhering to ‘Social Desirability Bias’, and selecting answer options they assume the survey creator wants.

Tip: It’s difficult to limit the occurrence of acquiescence response bias, as it is often a deeply ingrained, socially-conditioned response. However, balancing the number of positive and negative answer choices will reduce the risk.

Neutral Answer Selection

This type of bias concerns respondents who continually select the neutral answer options. These include options such as ‘Don’t Know’, ‘N/A’, and ‘No opinion’, where participants are given the opportunity to opt-out of making a selection.

Tip: It’s not likely that every question will apply to each respondent, so it’s beneficial to offer a neutral answer option. This way you won’t be collecting incorrect responses that participants were forced to give.

Extreme Responding

Not unlike the above, Extreme Responding is the respondent disposition to select only the extreme answer options. It mostly concerns scaled questions, where participants are able to choose options such as ‘Strongly Agree’ and ‘Strongly Disagree’. Although cultural and social differences in respondents can be a cause of extreme responding, it is often a result of leading wording. This wording makes participants feel that they need to fully agree or disagree with a statement, rather than be honest. To avoid this, ensure your question wording is as neutral as possible.

Tip: Make your survey anonymous. Respondents won’t feel like you’re monitoring their answer choices, and will provide more accurate answers.


Order Effects

The order of elements in your survey has the potential to influence the answer selections your respondents make; in other words, the order of your questions can create a response bias. Some element of Question 1 could affect the choice a participant makes for Question 5 if the questions are contextually linked. This is particularly true of the question directly preceding the respondent’s current question.

There are two kinds of effects this type of bias has:

Contrast Effects: The order of questions results in greater differences between respondents’ answer choices.

Assimilation Effects: The order of questions results in answer selections becoming more similar between respondents.

Tip: This is one of the most difficult forms of bias to prepare for, as the causes can be unpredictable. However, if time is no concern we recommend A/B testing your survey. This will allow you to test two or more formats and identify any possible order effects.
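A minimal sketch of such an A/B test, assuming you can route respondents to one of two question orderings (the helper names and figures are illustrative): hash each respondent id to a stable version, then compare answers to the same question across versions.

```python
import hashlib
from statistics import mean

def assign_version(respondent_id, versions=("A", "B")):
    """Deterministically split respondents between survey versions.

    Hashing the id keeps each respondent on the same version across
    sessions while dividing traffic roughly evenly.
    """
    digest = hashlib.sha256(respondent_id.encode()).digest()
    return versions[digest[0] % len(versions)]

def order_effect_gap(answers_a, answers_b):
    """Difference in mean answer to the same question between orderings.

    A large gap suggests an order effect worth investigating before
    pooling the two versions' results.
    """
    return abs(mean(answers_a) - mean(answers_b))

# Hypothetical ratings for one question under each ordering:
version_a = [4, 5, 4, 4, 5]   # question asked early in the survey
version_b = [2, 3, 2, 3, 2]   # same question asked after a priming item
print(order_effect_gap(version_a, version_b))  # prints the mean gap (~2.0)
```

A gap this large between otherwise comparable groups would be a strong hint that question order, not opinion, is driving the answers.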

Primacy Effects

The order of answers can also affect the choices respondents make. Primacy Effects pertain to respondents who select the first available answer option for each question. This could be for a myriad of reasons, both implicit and explicit, but will certainly have negative consequences for your results.

Tip: Although there’s no sure-fire way to stop respondents from exhibiting this form of satisficing, randomising the order of your answer options will decrease the number of times a specific answer option is chosen as a result of Primacy Effects. In doing so, you will prevent your findings from being unfairly weighted towards one option.
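Randomising answer order is simple to sketch in code. Assuming options are stored as a plain list (the function name is illustrative), shuffle a copy per respondent so the canonical order is preserved for analysis:

```python
import random

def randomised_options(options, rng=random):
    """Return a fresh, shuffled copy of the answer options.

    Shuffling a copy leaves the canonical order intact, so each
    response can still be mapped back to its option for analysis.
    """
    shuffled = list(options)
    rng.shuffle(shuffled)
    return shuffled

choices = ["Very satisfied", "Satisfied", "Neutral", "Dissatisfied"]
print(randomised_options(choices))  # a different order per respondent
```

The same randomisation also mitigates the recency bias described below, since no option is consistently last.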

Recency Bias

Recency Bias concerns those respondents who read the available answer options, then select the last option rather than re-evaluating all the answers.

Tip: Randomise the order of your answer options. This will reduce the extent to which any individual option is chosen simply for its position.


Other Types of Response Bias

Demand Characteristics

First documented as a bias affecting the results of scientific experiments, demand characteristics describe participants in a study who alter their behaviour to align with how they believe the ideal research participant should respond or act.

This bias also applies to the realm of surveys, as respondents are prone to altering their views to meet the aims and objectives of that project. Alternatively, if the respondent’s participation is mandatory, they may purposefully make answer selections that hinder the aims of the research.

Tip: Distance participants from the hypothesis/ aims of your research if you believe it could elicit this type of bias.

Social Desirability Bias

Where self-reporting is concerned, respondents don’t always provide truthful answers. If they feel that the set questions are ‘transparent’, they may speculate as to what answers are ‘expected’ of them, and provide those in place of their own.

Respondents affected by this bias will often over-report on good behaviours, and under-report on bad behaviours. This is one of the most likely biases to occur because we all want others to view us favourably, especially on topics such as:

  • Abilities and Skills
  • Personality
  • Sexual Behaviour
  • Religion and Spirituality
  • Financial Earning
  • Unlawful Behaviour (e.g. drug use)

Tip: Respondents will feel less like you’re watching them if your survey is anonymous. Therefore, they’re more likely to provide truthful representations of their experiences and opinions.

Prestige Bias

Many use the term ‘Prestige Bias’ as synonymous with Social Desirability Bias, but we believe it to be slightly different. It arises in cases where respondents are asked about their social, educational and financial statuses, and incurs inflated responses. That is, respondents report these statuses in a way that makes them appear to have more social power, a more advanced level of education, and a more favourable financial situation.

Tip: Neutrally worded questions and answer options will reduce the occurrence of Prestige Bias.

Non-Response Bias

In cases where your sample is systematically different from your target population, your data may become non-representative of that population. This occurs when the majority of your respondents exhibit similar traits or opinions, causing your results to be weighted towards particular demographics or sets of ideologies. It is also known as ‘participation bias’.

Tip: This is an exceedingly difficult bias to suppress or identify, but you can reduce the risk by sharing your survey through a range of platforms and to as many diverse groups as possible.

Self-Selection Bias

This concerns those respondents who intentionally inject themselves into a study. It’s similar to non-response bias, as the respondent pool elicits a different set of responses from those who don’t respond. Problems arise because, for the most part, surveys require a randomised audience for their results to be representative of a population and to have any meaningful significance.

Note: Although this wouldn’t cause problems for employee feedback surveys, or other similar projects, it would pose a problem for academic and market research.

Hostility Bias

Asking respondents about unpleasant memories or negative experiences can make them hostile, which will in turn affect not only the question they’re answering, but also all of their following responses. Sensitive questions concerning topics such as divorce, debt, and death can often provoke this type of response bias.

Tip: Avoid sensitive subjects (or those which may elicit negative/ hostile responses) where possible, and frame question language neutrally.

Sponsorship Bias

When respondents are aware of survey sponsorship or branding, their perception of that organisation can influence their responses. E.g. if a respondent is filling out a survey concerning car rental, which is sponsored by Enterprise, their opinion concerning Enterprise will affect their answers.

Tip: Obscure any sponsorships to respondents, unless the branding is essential to the understanding of the project.

Stereotype Bias

Questions based on socio-economic factors (such as technical skill, nationality, gender, and age) may reinforce or trigger a stereotype in the mind of the respondent. This can lead them to fulfil that stereotype in their response. For example, if the language in your project implies that younger people are better with technology, that stereotype will influence respondents’ answers.

Tip: Frame all questions and answers as neutrally as possible.

Recall Bias

Our memories aren’t always as reliable or accurate as we might like. When you ask respondents to recall past events, especially if those events are infrequent (such as a one-off purchase), they tend to align their memories with their current beliefs.

Tip: If you need to enquire about past events with respondents, ensure one of two things. Either that the event in question is frequently occurring, or that it took place in the recent past.

Unfortunately, it would be impossible to prevent all these types of response bias from affecting your project. People do not respond passively to stimuli; instead, they integrate multiple ideologies and experiences into each of their responses.

By being mindful of biases when creating a survey, you can reduce their effect on your data. However, the presence of bias in your results doesn’t necessarily render them useless. You could apply your knowledge and discuss the implications biases have for your results, showcasing a deeper level of interpretation.
