
What are authenticity checks?



Prolific’s authenticity checks help you collect genuine human data by identifying when responses or behavior may not reflect authentic human participation.

Authenticity checks fall into two distinct types, each designed for a different risk:

  • LLM authenticity checks: identify when participants are using LLMs (like ChatGPT) to answer free-text questions. These look for suspicious behaviors like copy-pasting and tab-switching on free-text questions only.

  • Bot authenticity checks: identify when AI agents or fully automated bots are answering your study. These look for non-human or scripted behavior across all question types.

These checks address different use cases, so results are shown separately and should be interpreted independently.

Please note that when you opt in to authenticity checks, submissions to your study are not 'viewed' by Prolific. The authenticity check process is automated, and once it has run, the only data we have access to is the output score (the likelihood of non-original content generation or non-human behavior).


LLM authenticity checks

What are LLM authenticity checks?

LLM authenticity checks detect when participants use AI tools or other third-party sources to generate free-text responses instead of writing their own original answers.

They analyze behavioral patterns in written responses that indicate the content may not be authentically human-generated, for example, when a participant has copied and pasted an answer from ChatGPT or another LLM. Note that these checks focus on behavioral analysis and do not analyze the written words themselves.

These checks support studies where genuine personal expression, reasoning, or lived experience is essential.

When to use LLM authenticity checks

Use LLM authenticity checks for studies that include free-text questions and require participants to provide their own thoughts, opinions, or experiences.

Best practices:

  • Clearly state that participants must not use AI tools or external websites

  • Explain why authentic responses are important for your research

👍 Example instruction:

“Please share your personal experience with social media and how it has impacted your daily life. Write a thoughtful response of at least 150 words. Do not use AI tools or external sources — we are interested in your genuine personal experiences.”

Research shows that explicit instructions significantly improve response authenticity.

When not to use LLM authenticity checks

Do not use LLM authenticity checks if your study:

  • Does not include free-text responses

  • Requires participants to research information externally

  • Asks participants to summarize or reference external documents or websites

  • Requires tools or resources outside the study to complete the task

🚫 Example prompt:

“Visit Wikipedia and research the history of coffee cultivation. Write a 150-word summary of how coffee production spread globally.”

In these cases, external source use is expected and appropriate.

💡 Important: When we mention “external source” use, we are specifically referring to participants' behavior and content patterns while answering questions.


Bot authenticity checks

What are bot authenticity checks?

Bot checks detect when a study has been completed by automated bots, agents, or scripts rather than by a human participant (for example, an AI agent answering on the participant’s behalf).

They focus on interaction patterns and behavioral signals, not the meaning or quality of written responses.

Bot checks help protect studies from:

  • Automated survey completion

  • Scripted task execution

  • Non-human interaction patterns

When to use bot checks

Bot checks are useful for:

  • Any study where genuine human participation is required

  • Studies vulnerable to automation or scripted behavior

  • Tasks where behavioral interaction patterns matter

You can use bot checks with or without LLM authenticity checks, since they address different risks.

When not to use bot checks

Do not use bot checks if your study:

  • Requires participants to use AI tools (including LLMs)

  • Requires participants to use a virtual machine

  • Requires participants to use accessibility tools

  • Must be completed on a specific device type (e.g., a tablet)


Which platforms support the authenticity check?

LLM checks are currently available on Qualtrics and AI Task Builder.

Bot checks are currently available on Qualtrics.

To enable authenticity checks in Qualtrics, you'll need to add JavaScript code to your survey. Follow our platform-specific guides.

AI Task Builder: Authenticity checks are automatically enabled for every study. They're only relevant if your study meets the "when to use them" criteria above.

Note: Authenticity checks are not available for Taskflow studies.


Interpreting authenticity check results

Results are displayed in two separate columns: one for LLM authenticity checks and one for bot checks.

Each column shows one of three outcomes:

  • High (green): All responses have high authenticity

  • Mixed (orange):

    • For bot checks: this means some responses or behaviors were flagged. This can sometimes occur for legitimate reasons (for example, the use of accessibility tools). We recommend reviewing the submission and, where appropriate, contacting the participant to understand what may have caused this result.

    • For LLM checks: this means some questions were flagged, e.g. 2 out of 4 responses seemed inauthentic.

  • Low (red): Low authenticity patterns detected consistently

LLM checks: You can review each question individually by downloading the demographic data.
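If you export your results for offline review, filtering out submissions that need a second look might be sketched as below. The column names (`llm_check`, `bot_check`) and outcome labels are our assumptions for illustration, not Prolific's actual export schema.

```python
import csv
import io

# Hypothetical export: column names and outcome labels are assumptions,
# not Prolific's actual export schema.
SAMPLE_EXPORT = """participant_id,llm_check,bot_check
p1,High,High
p2,Mixed,High
p3,High,Low
p4,High,High
"""

def submissions_to_review(csv_text):
    """Return participant IDs whose LLM or bot check is not 'High'
    (i.e. Mixed or Low), so they can be reviewed manually."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        row["participant_id"]
        for row in reader
        if row["llm_check"] != "High" or row["bot_check"] != "High"
    ]

print(submissions_to_review(SAMPLE_EXPORT))  # ['p2', 'p3']
```

Because the two checks address different risks, the sketch treats either column being below "High" as a reason to review rather than combining them into a single score.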

When no result is shown

A loading icon or question mark means we couldn't generate a check result for that submission.

This can happen when:

  • Data is still processing shortly after the participant submits their response

  • The study doesn't include any free-text questions (for LLM checks)

  • We couldn't retrieve assessment data from Qualtrics

When this happens, you won't see a check result for that submission. Review the response manually and approve or reject it as you normally would.


Taking action on flagged submissions

When the system flags responses as mixed or low, we recommend:

  1. Review the submission: Examine the participant’s response in the context of your research question

  2. Consider participant explanations: If provided, evaluate their reasoning

  3. Make an informed decision:

    • If clearly non-authentic: Reject the submission

    • If legitimate reason exists or the data meets your requirements: Approve the submission

    • If still unsure: Return the submission

Prolific regularly analyzes mixed and low authenticity results in data quality audits.


Quality assurance considerations

  • Remember that, while highly accurate, no detection system is perfect

  • Consistently communicate your authenticity requirements to participants

  • When in doubt about flagged responses, review the submission manually and approve, return, or reject it as you normally would.


Benefits and value of authenticity checks

Implementing authenticity checks helps you:

  • Collect higher-quality data by ensuring genuine human responses

  • Reduce time spent on manual verification and filtering out non-authentic responses

  • Build more representative datasets that truly reflect human perspectives

  • Make better-informed decisions based on authentic human insights rather than AI-generated content

By securing authenticity at the data collection stage, you'll spend less time cleaning data and more time generating valuable insights.


Need further help?

If you have questions about implementing authenticity checks or interpreting results:

  • Check our Help Center for updates and additional guidance

  • Contact our Support team using the icon at the bottom right of this page
