Prolific's Attention and Comprehension Check Policy

Why do you need checks in your study?

At Prolific we want our researchers to be confident that the participants in their studies are truly paying attention to what is being asked of them and understand the requirements of the research.

Although we do everything we can to ensure our sample as a whole exhibits high levels of attention and comprehension (see our recent Behavior Research Methods paper here), we also advocate the use of checks within researchers' surveys as an extra line of defence.

  • Attention checks are simple ways to determine whether participants are paying attention to your study instructions, and how much effort they devote to reading questions before answering them (Oppenheimer, Meyvis, & Davidenko, 2009). At Prolific we allow the use of Instructional Manipulation Checks (IMCs) and nonsensical questions to measure participant attention (please note that nonsensical questions are a new addition to our policy and were not previously allowed on the platform; this guidance was added on 17/01/22)
  • Comprehension checks are used to measure participants' level of understanding of specific study instructions, to ensure that they understand what is required of them


Our Attention Check Policy

We allow two different types of attention check on Prolific: instructional manipulation checks (IMCs), and nonsensical questions. Below we describe each in more detail including the rules for their use in studies on Prolific:

Instructional Manipulation Checks (IMCs)

IMCs explicitly instruct a participant to complete a task in a certain way, and are therefore designed to see whether or not a participant has paid attention to the question being asked.

Our criteria for a valid IMC:

  • They should check whether a participant has paid attention to the question itself, rather than only to the instructions above it
  • Questions must not assume prior knowledge
  • Participants must be explicitly instructed to complete a task in a certain way (e.g. 'click 'Strongly disagree' for this question'), rather than leaving room for misinterpretation (e.g. 'Prolific is a clothing brand. Do you agree?')
  • They must be easy to read (i.e., should not use small font, or have reduced visibility)
  • They cannot rely on memory recall
  • If your study is 5 minutes or longer, then participants must fail at least two checks to be rejected; shorter studies can use a single failed check as grounds for rejection

Audio/Video IMCs

Please note that any IMCs using video or audio format are subject to the same guidance as text-based checks. For example, the guidance of 'they must be easy to read' should be interpreted as easy to see on the video or easy to hear in an audio check. If these types of checks are used it is particularly important to ensure that memory recall is not required (i.e., the audio/video must be presented at the same time as the question).

A good example of an IMC:


  • This tests whether a participant has paid attention to the question itself. If they are attentive then they will check what colour they have been asked to enter before responding
  • The correct response is clearly defined - select 'Green' - rather than being open to misinterpretation
  • The participant is informed that their attention is being tested

A bad example of an IMC:


  • The question being asked does not reference the instructions and is therefore open to misinterpretation (i.e., a participant may just answer with their favourite colour)
  • It does not test whether the participant has paid attention to the question itself, only to the instructions
  • The correct response is needlessly confusing - 'select the second last option' - rather than being clear and explicit


Nonsensical Items

Another form of attention check is to embed a nonsensical item within a survey, to which only one or two of the response options can be justified as objectively correct (see Paolacci, Chandler, & Ipeirotis, 2010).

Our criteria for a valid nonsensical item:

  • Questions must use a scale response
  • Questions must not assume prior knowledge
  • Participants should not be given a 'neutral' response option (i.e., no middle scale value)
  • Correct responses should be either extreme agreement or extreme disagreement; however, any response in the same direction (e.g. 'Disagree' when 'Strongly disagree' is correct) should be accepted
  • Participants who answer in the opposite manner to the objectively correct answer have failed
  • Be considerate of any unexpected answers that participants can justify. If it is possible to justify an answer to your check other than the one you intended, then the check is not suitable
  • As with IMCs, if your study is 5 minutes or longer, then participants must fail at least two checks to be rejected; shorter studies can use a single failed check as grounds for rejection
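
The scoring rule above can be sketched in a few lines. This is an illustrative example only, not a Prolific API; the scale labels and function name are assumptions:

```python
# Hypothetical sketch: scoring a nonsensical attention check on a
# 4-point scale with no neutral midpoint, as the criteria above require.
SCALE = ["Strongly disagree", "Disagree", "Agree", "Strongly agree"]

def score_nonsensical_item(response: str, correct_direction: str) -> bool:
    """Return True (pass) if the response falls on the objectively
    correct side of the scale; any strength of (dis)agreement passes."""
    if correct_direction == "disagree":
        return response in ("Strongly disagree", "Disagree")
    return response in ("Agree", "Strongly agree")

# "I swim across the Atlantic Ocean to get to work every day."
print(score_nonsensical_item("Disagree", "disagree"))        # True: passes
print(score_nonsensical_item("Strongly agree", "disagree"))  # False: fails
```

Note that the participant is never penalised for choosing the milder option on the correct side, in line with the criterion above.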

A good example of a nonsensical item:


  • The statement presented has an objectively correct answer (nobody could swim across the Atlantic Ocean to get to work every day) and does not require any prior knowledge
  • Attentive participants should select 'Strongly disagree'; however, participants who selected 'Disagree' would also pass

A bad example of a nonsensical item:

  • Even though there is an objectively correct answer to this (the 1980 Olympics were held in Moscow, not Berlin), it assumes that the participant knows where the Olympics were held in that year, and therefore assumes prior knowledge
  • A 'neutral' response has been included, and is likely to be selected if a participant does not have prior knowledge of the subject
  • Because of these problems we would not allow any rejections on the basis of this check


Mixing IMCs and Nonsensical Items

Both IMCs and nonsensical items are classed as attention checks under our policy. Therefore, if your study is 5 minutes or longer, you are allowed to use one of each to reach the minimum of two attention checks.
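
A minimal sketch of this rejection rule, with IMCs and nonsensical items counted together; the function name and signature are illustrative, not part of any Prolific tooling:

```python
# Hypothetical sketch of the rejection threshold described above.
def rejection_allowed(study_minutes: float, failed_checks: int) -> bool:
    """IMCs and nonsensical items both count towards the total.
    Studies of 5 minutes or longer need at least two failed checks;
    shorter studies may reject on a single failure."""
    required = 2 if study_minutes >= 5 else 1
    return failed_checks >= required

print(rejection_allowed(10, 1))  # False: one failure isn't enough at 10 min
print(rejection_allowed(10, 2))  # True
print(rejection_allowed(3, 1))   # True: short study, one failure suffices
```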


Our Comprehension Check Policy

Comprehension checks are used to test whether a participant has understood critical information that is integral to completing the study successfully. You should only use this type of check if, without it, the task couldn't be completed properly.

Our criteria for a valid comprehension check:

  • Participants must be free to re-read the key information at the time the comprehension check is presented
  • Participants must be given at least two chances to get a correct answer
  • These checks cannot involve free-text responses
  • Comprehension checks must be given at the start of the study so participants are not screened out after having put in significant time and effort
  • If a participant fails a comprehension check twice then they should be immediately asked to return their submission by closing the survey and clicking 'Stop Without Completing' on Prolific
  • Participants should never be rejected on the basis of these checks. If participants who have failed comprehension checks are appearing as 'awaiting review' then please contact the support team for help in returning the submissions
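
The flow these criteria describe can be sketched as follows. All names here are illustrative assumptions; this is not Prolific or survey-platform code:

```python
# Hypothetical sketch of the comprehension-check flow: up to two
# attempts, and a second failure ends the survey with a request to
# return the submission -- never a rejection.
def run_comprehension_check(answers: list[str], correct: str) -> str:
    """`answers` holds the participant's successive attempts, with the
    key information still visible for re-reading between attempts."""
    for answer in answers[:2]:
        if answer == correct:
            return "continue"  # understood; proceed with the study
    # Failed twice: ask the participant to return the submission via
    # 'Stop Without Completing' -- do not reject.
    return "ask_to_return"

print(run_comprehension_check(["B", "A"], "A"))  # continue (second attempt)
print(run_comprehension_check(["B", "C"], "A"))  # ask_to_return
```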

A good example of a comprehension check:

Instructions taken from Gordon et al. (2019)


  • Comprehension checks are presented alongside the instructions making it easy for the participant to review the information
  • Questions highlight that a participant should re-read instructions if they are not sure, and that they will have two chances to get this right
  • Response options are sufficiently distinct and the correct options use similar wording to the instructions

A bad example of a comprehension check:

Page 1


Page 2



  • Questions and instructions are on separate pages and there is no option to go back. These questions therefore rely too heavily on memory
  • No guidance is provided to the participant about how to re-read information or how many attempts they have
  • Questions are unnecessarily vague
  • Response options are confusing, and too similar, meaning that the correct responses are not clear

You should give the participant the best possible opportunity to read and understand your instructions. Even if they have not done this initially, the comprehension checks should prompt them to do so.


Payments and rejections based on checks

  • You can reject participants based on failed attention checks, provided the checks are in line with the guidance above
  • You cannot reject participants based on failed comprehension checks; these participants should instead be asked to return their submission (if they do not respond to this request within 7 days, contact our researcher support team and they can make the returns for you)
  • Please be considerate with reviewing all submissions. Always keep in mind that participants are real people who are helping you get the data you need!


An important caveat...

  • There is evidence that the use of attention checks can change participants' behaviour in a study (Hauser & Schwarz, 2015), and may amplify, undo, or interact with the effects of a manipulation (Hauser, Ellsworth, & Gonzalez, 2018)
  • So before you add attention checks to your study, make sure you have followed the guidance above to ensure that they do not negatively impact your results


Want a second opinion on your check?

If you have any questions about any of the above, or would like to get your check pre-approved by us, then please contact our researcher support team.


If you want to find out more, check out the resources below:


  • Gordon, A., Quadflieg, S., Brooks, J. C., Ecker, U. K., & Lewandowsky, S. (2019). Keeping track of ‘alternative facts’: The neural correlates of processing misinformation corrections. NeuroImage, 193, 46-56.
  • Hauser, D. J., Ellsworth, P. C., & Gonzalez, R. (2018). Are manipulation checks necessary? Frontiers in Psychology, 9, 998.
  • Hauser, D. J., & Schwarz, N. (2015). It's a trap! Instructional manipulation checks prompt systematic thinking on "tricky" tasks. SAGE Open, 5(2), 2158244015584617.
  • Oppenheimer, D. M., Meyvis, T., & Davidenko, N. (2009). Instructional manipulation checks: Detecting satisficing to increase statistical power. Journal of Experimental Social Psychology, 45(4), 867-872.
  • Paolacci, G., Chandler, J., & Ipeirotis, P. G. (2010). Running experiments on Amazon Mechanical Turk. Judgment and Decision Making, 5(5), 411-419.
