How to prevent and detect AI or Large Language Model (LLM) use in studies

Some studies require participants to share their own thoughts, experiences, or reasoning without using AI tools such as ChatGPT or other Large Language Models (LLMs).

Prolific provides tools to help researchers identify potential AI use and collect authentic human responses.

This guide explains how to:

  • detect possible AI-generated responses

  • discourage participants from using AI tools in your study

For an overview of Prolific’s broader safeguards, see How Prolific protects your data integrity.


How to prevent AI or LLM use in your study

Clearly instruct participants not to use AI tools

Participants may use AI tools simply because they do not realise it is not allowed.

If your study requires original responses, state this clearly in your instructions.

Example instruction:

Please write your answer based on your own thoughts and experiences. Do not use AI tools or external sources. We are interested in your genuine personal perspective.

Clear instructions can improve response authenticity.


Use authenticity checks when appropriate

LLM authenticity checks are most useful for studies that include free-text questions requiring personal opinions, reasoning, or experiences.

Do not use LLM authenticity checks if your study requires participants to research information, summarise external sources, or use tools outside the study.

For more information, see What are authenticity checks?


How to detect AI or LLM use in your study

Use LLM authenticity checks

LLM authenticity checks help identify when participants may be using AI tools to generate free-text responses.

These checks analyse behavioural signals associated with external content generation, such as copying and pasting or switching tabs during free-text questions. The system does not analyse the written content itself.

Results are shown in three categories:

  • High authenticity: responses appear authentically human

  • Mixed authenticity: some responses may show external source use

  • Low authenticity: strong patterns suggesting non-authentic responses

If a submission is flagged, review the response in the context of your study and decide whether to approve, return, or reject it.
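The three-category triage above can be sketched as a small helper. This is an illustrative workflow only, not Prolific policy: the category names mirror the results above, but the suggested actions are assumptions, and a flag is always a prompt for human review.

```python
def suggest_action(category):
    """Map an authenticity result to a suggested next step.

    Illustrative triage only -- always review flagged submissions
    in the context of your own study before deciding.
    """
    actions = {
        "high": "approve",
        "mixed": "review before deciding",
        "low": "review before approving, returning, or rejecting",
    }
    return actions.get(category.lower(), "unrecognised category")

print(suggest_action("High"))   # approve
print(suggest_action("mixed"))  # review before deciding
```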

For detailed guidance, see What are authenticity checks?


Review unusually fast submissions

Responses submitted much faster than the expected completion time may suggest low effort or outside assistance.

Prolific flags exceptionally fast submissions so you can review them before you approve.
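If you export completion times for your own analysis, a simple screen like the one below can surface unusually fast submissions. Both the data shape and the 0.4× median cut-off are illustrative assumptions, not Prolific's flagging rule.

```python
from statistics import median

def flag_fast_submissions(times_seconds, threshold=0.4):
    """Return IDs of submissions completed in under threshold x the median time.

    `times_seconds` maps a submission ID to its completion time in seconds.
    The 0.4 cut-off is an illustrative assumption; tune it to your study.
    """
    cutoff = threshold * median(times_seconds.values())
    return sorted(sid for sid, t in times_seconds.items() if t < cutoff)

times = {"sub_a": 610, "sub_b": 95, "sub_c": 580, "sub_d": 540}
print(flag_fast_submissions(times))  # sub_b is far below the median
```

A flagged submission is a candidate for review, not grounds for automatic rejection.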


Review responses when needed

Manual review can help identify responses that need closer attention.

Signs that may warrant review include:

  • responses that are unusually long given the time taken

  • text that appears copied or pasted

  • answers that follow a generic or templated structure

These signals are not definitive proof of AI use. Always review responses in the context of your study before taking action.
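One of the signals above, text that is unusually long for the time taken, can be turned into a rough heuristic. The cut-off below is an assumed upper bound on sustained typing speed, not a Prolific rule, and a flag is only an invitation to look closer.

```python
def needs_review(response_text, seconds_spent, max_chars_per_sec=6.0):
    """Flag free-text answers produced faster than a plausible typing speed.

    The 6 chars/sec ceiling is an illustrative assumption. A True result
    means "worth a manual look", never proof of AI use or pasting.
    """
    rate = len(response_text) / max(seconds_spent, 1)
    return rate > max_chars_per_sec

print(needs_review("A" * 900, 60))  # 15 chars/sec: worth a look
print(needs_review("A" * 120, 60))  # 2 chars/sec: plausible typing speed
```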


When AI tool use may be acceptable

Some studies intentionally require participants to use AI tools or external resources.

If your study involves researching information, summarising external content, or interacting with AI systems, participants may need to use these tools to complete the task.

In these cases, LLM authenticity checks may not be appropriate. See What are authenticity checks? for guidance.


If you need help setting up authenticity checks or reviewing flagged submissions, contact our Support team using the icon at the bottom right of the page.
