We’ve had occasional reports of high attention check failure rates on Prolific. We review every such concern, and have collated our conclusions below:

First, the “failure rate” can depend on the questions and instructions that your attention checks are embedded in. If there are lots of similar questions and the attention check looks very similar to the surrounding questions, then it is understandable that participants may not read the full text or instructions and thus fail your attention check. This is especially true when the instructions do not appear essential for answering the questions correctly.

Second, real people look for shortcuts a lot of the time. They may skip whatever seems unnecessary to read, which does not necessarily mean that they provide poor-quality data. The purpose of an attention check is to see whether a participant has paid attention to the question itself, not so much to the instructions above it. A check on instruction-reading is a fair measure of attention only when reading those instructions is crucial for valid completion of the task. You may also want to read this article for more guidance and examples.

Third, participants on Prolific are considered to be more naïve than participants on MTurk (see Peer, Brandimarte, Samat, & Acquisti, 2017). MTurk participants are more experienced survey takers than Prolific participants, so they may know how to pass attention checks without necessarily reading everything thoroughly.

In conclusion, keep the above in mind when designing your attention check, and don’t be afraid to pilot test and modify it! Please also do not hesitate to contact our Support Team if you feel your attention check failure rate is unreasonably high.

Source:

Peer, E., Brandimarte, L., Samat, S., & Acquisti, A. (2017). Beyond the Turk: Alternative platforms for crowdsourcing behavioral research. Journal of Experimental Social Psychology, 70, 153-163.
