What are trap questions?
You may have heard of trap questions under other names, including ‘Red Herring’ questions, Attention Checks, and Instructional Manipulation Checks (IMCs).
They’re intended to ensure that respondents are thoroughly reading and analysing each question, as opposed to rushing through your survey.
In theory, those respondents who aren’t paying attention will select an incorrect answer. If you’re creating a survey with Kwiksurveys, you can apply Question Logic to a trap and disqualify respondents who fail.
This will prevent them from providing any further answers and potentially skewing your results. Alternatively, you could simply use a filter to exclude failed respondents from your Quick Report.
Example Trap Question:
Question: Which of these are animals?
A: Dog
B: Cat
C: Lettuce
D: All of the Above
It’s fairly obvious to any respondents reading your survey thoroughly that ‘Lettuce’ is the incorrect answer here. And so, all those who select it could be disqualified from your survey, and filtered from your results.
This isn’t the only way you can format these ‘traps’; you could include only one correct answer, instruct respondents which answer to choose, or utilise a host of other question types such as Matrices and Order Rankings.
In theory, trap questions let you identify these less engaged respondents and remove them from your results, thereby preserving the integrity of your data. However, there are several risks to employing this kind of tactic, which we’ll address a bit further down.
Why are trap questions useful?
If certain respondents are rushing through your survey, and not paying attention to their answer selections, your data set will not be representative of your target population.
This is a common frustration for survey creators, as you can spend a substantial amount of time and effort tailoring surveys to collect a specific set of data.
There are a few methods that these respondents use to speed through your surveys:
- Primary Answer Selection: Respondents do not attempt to engage with the questions and will choose the first answer option, or the first plausible answer they encounter.
- Positive Answer Choices (Acquiescence Response Bias): There is a tendency among respondents to solely select the positive answer options. The reasons for this could be implicit or explicit, but the data produced by these respondents would taint your results.
- Neutral Answer Choices: Where there are neutral answer options available, these respondents are likely to select options such as ‘Don’t Know’, ‘N/A’, or ‘No Opinion’, regardless of whether it represents their views.
- Straight Lining (Non-Differentiation): These cases concern matrix questions, where respondents are asked to select an appropriate answer from a scale. Respondents may form a pattern with their answer choices, such as straight lines appearing in your results, which indicates a lack of thought given to questions.
- Speed Runs: Not unlike the examples above, these are the instances where respondents simply speed through your project without paying any attention to the questions asked, or their own responses.
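Some of the patterns above can be spotted directly in your exported results. As a rough illustration (the respondent IDs, 1–5 scale, and threshold are hypothetical examples, not part of any particular survey tool), straight-lining can be flagged by checking whether a respondent gave the same answer to every row of a matrix question:

```python
# Sketch: flagging straight-lining (non-differentiation) in matrix responses.
# Respondent IDs and the 1-5 answer scale below are made-up example data.

def is_straight_lining(answers):
    """Return True if every answer in a matrix question is identical."""
    return len(set(answers)) == 1

# Each respondent's answers to a five-row matrix question.
responses = {
    "resp_01": [3, 4, 2, 5, 3],  # varied answers: likely engaged
    "resp_02": [5, 5, 5, 5, 5],  # identical answers: possible straight-lining
    "resp_03": [1, 1, 1, 1, 1],  # identical answers: possible straight-lining
}

flagged = [rid for rid, ans in responses.items() if is_straight_lining(ans)]
print(flagged)  # ['resp_02', 'resp_03']
```

A flag like this is only a hint, not proof of disengagement: a respondent can legitimately hold the same view on every row, which is exactly the false-positive risk discussed below.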
For more information on the above, you can read our article on Response Bias.
Risks of Using Trap Questions
There are several risks to using trap questions:
- Losing Trust: Trapping respondents may cause lost trust, which can lead to non-completion or answer dishonesty.
- Hawthorne Effect: Respondents who feel monitored or tracked may change their behaviour. Where respondents sense that you’re looking for a particular answer, they may adjust their selections towards what they deem more ‘socially acceptable’.
- Data Bias: We assume that respondents who answer the trap question incorrectly aren’t paying attention. However, certain demographic or psychographic groups may be more likely to fail these questions despite engaging honestly. That is, there may not be a direct correlation between those who answer the trap question incorrectly and those who aren’t engaging. Acting on this assumption can bias your results, as you risk filtering out perfectly legitimate respondents.
- Failed Traps: In addition to the above, there is also the chance that respondents who aren’t paying attention will accidentally choose the correct answer to the trap question. If these respondents are clicking through your project at random, there is a quantifiable chance that they land on the correct option by luck.
- Respondent Awareness: Respondents may react negatively if they become aware you’re attempting to trap them. This could provoke them to answer dishonestly or leave the survey.
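That “quantifiable chance” of a failed trap is easy to estimate. As a minimal sketch, assuming a purely random clicker and traps with a single correct option (the four-option, one- and two-trap numbers below are illustrative, not from the article):

```python
# Sketch: probability that a purely random clicker passes every trap question.
# Assumes each trap has n_options choices, exactly one of which is correct,
# and that the traps are answered independently.

def pass_probability(n_options, n_traps):
    return (1 / n_options) ** n_traps

# A single 4-option trap lets a random clicker through 25% of the time.
print(pass_probability(4, 1))  # 0.25

# Two independent 4-option traps cut that to 6.25%.
print(pass_probability(4, 2))  # 0.0625
```

In other words, one trap question alone still passes a meaningful share of random clickers; only multiple independent traps drive that share down, at the cost of the trust and bias risks listed above.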
How to Best Use Trap Questions
There are more than a few risks to employing trap questions in your surveys. The most critical of these is the chance of alienating the respondents who are willing to fully engage with your project.
You can minimise this alienation by placing a more obvious trap question at the beginning of your survey.
This will act as more of a deterrent for answer dishonesty rather than as an explicit trap for respondents.
It’s important to note that not all surveys will require trap questions, as not all surveys invite dishonest respondents. Answer dishonesty occurs for three main reasons: the survey is too long, too complex, or offers an incentive.
Respondents can become fatigued by surveys that are too long or complex, and provide dishonest answers out of frustration. Of course, this is completely understandable as we’re essentially asking them to perform a task for free.
However, this doesn’t necessarily mean you should offer an incentive to stem answer dishonesty. An incentive may attract a more opportunistic type of respondent who speeds through your survey in order to claim the reward. Whatever the reason for answer dishonesty, it will certainly have a negative impact on your results.