Contents:
1. What is an authenticity check?
2. Which platforms support the authenticity check?
3. Best practices for using authenticity checks
• When to use authenticity checks
• When not to use authenticity checks
4. How authenticity checks work
5. Interpreting and acting on authenticity results
• Understanding the results display
• Taking action on flagged responses
• Quality assurance considerations
6. Benefits and value of authenticity checks
7. Need further help?
Helpful resources:
1. How to create an API token for authenticity checks
2. How to add authenticity checks to your Gorilla study
3. How to add authenticity checks to your Qualtrics study
What is an authenticity check?
Prolific's authenticity check helps you secure genuine human insights by detecting when participants use external sources instead of writing their own responses. Our behavioral pattern analysis identifies when participants rely on third-party websites or AI tools (such as ChatGPT and other LLMs, agents, or operators) rather than sharing their own thoughts and experiences.
This feature supports our commitment to delivering the highest-quality human data, ensuring your research captures authentic human perspectives rather than AI-generated or copied content.
Which platforms support the authenticity check?
Our authenticity check feature is currently available on Qualtrics, Gorilla, and Prolific's AI Task Builder.
For AI Task Builder: The feature is available directly within the interface, with no extra code to add.
For Qualtrics and Gorilla: Implementation requires adding a small JavaScript snippet to your survey. Follow the platform-specific guides listed under Helpful resources above.
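For illustration only, a Qualtrics implementation is typically added through a question's JavaScript editor. The sketch below shows the general shape of such a snippet; the script URL is a placeholder, and the exact code to paste comes from the Qualtrics guide listed under Helpful resources above.

```javascript
Qualtrics.SurveyEngine.addOnload(function () {
  /* Illustrative sketch only - copy the exact snippet and script URL from
     Prolific's Qualtrics guide. The address below is a placeholder. */
  var script = document.createElement("script");
  script.src = "https://example.com/prolific-authenticity-check.js"; // placeholder
  script.async = true;
  document.head.appendChild(script);
});
```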
Best practices for using authenticity checks
When to use authenticity checks
Authenticity checks are designed for free-text questions where you need genuine personal responses from participants. For optimal results:
- Be explicit in your instructions: Clearly state that participants should not use AI tools or third-party websites when responding.
- Set clear expectations: Our research shows that explicit instructions about not using external sources improve response authenticity by 61%.
👍 “Please share your personal experience with social media and how it has impacted your daily life. Write a thoughtful response of at least 150 words. Note: Please do not use AI tools or external sources - we are specifically interested in your genuine, personal experiences and thoughts.”
When not to use authenticity checks
Avoid using authenticity checks for tasks where participants are expected to:
- Research information from external sources
- Reference documents or websites as part of the task
- Use tools that are necessary for completing the assignment
🚫 “Please visit Wikipedia and research the history of coffee cultivation. Using the information you find, write a 150-word summary explaining how coffee production spread from Ethiopia to other parts of the world.”
How authenticity checks work
Our system monitors behavioral patterns while participants compose their responses, detecting actions that indicate content is being sourced externally rather than written authentically. Our model has been tested internally and is highly precise with very few false positives:
- 98.7% precision rate: When a participant is flagged, the model is almost always correct
- Only 0.6% false positive rate: Minimal risk of incorrectly flagging genuine responses
- Comprehensive detection: Identifies patterns consistent with ChatGPT, other LLMs, agents, operators, and content copied from websites
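To see what these rates mean in practice, here is a small illustrative calculation. The counts of 1,000 responses are hypothetical; only the 98.7% precision and 0.6% false positive figures come from our internal testing.

```javascript
// Illustrative arithmetic only - the counts of 1,000 responses are hypothetical.
var precision = 0.987;          // share of flagged responses that are genuinely non-authentic
var falsePositiveRate = 0.006;  // share of authentic responses flagged in error

var correctFlagsPer1000Flagged = 1000 * precision;        // ~987 of every 1,000 flags are correct
var falseFlagsPer1000Genuine = 1000 * falsePositiveRate;  // ~6 of every 1,000 genuine responses flagged

console.log(correctFlagsPer1000Flagged, falseFlagsPer1000Genuine);
```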
Interpreting and acting on authenticity results
Understanding the results display
Results appear in the authenticity check column with clear visual indicators:
- Green bar: All free-text responses show authentic interaction patterns
- Red bar: Suspicious patterns detected in all free-text responses
- Mixed bar: Some questions flagged (e.g., 50% green, 50% red if 2 out of 4 questions raise suspicion)
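For illustration, the split of a mixed bar reflects the share of free-text questions that were flagged (e.g., 2 of 4 flagged questions gives a 50/50 split). The per-question values in this sketch are hypothetical:

```javascript
// Hypothetical example: 2 of 4 free-text questions flagged -> 50% red, 50% green.
var questionFlagged = [false, true, false, true];
var flagged = questionFlagged.filter(Boolean).length;
var redShare = (flagged / questionFlagged.length) * 100; // 50
var greenShare = 100 - redShare;                         // 50
console.log(redShare + "% red, " + greenShare + "% green");
```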

💡 Important: When we mention "external source" use, we are specifically referring to participants' behavior patterns while answering questions. See "How authenticity checks work" above for more details.
Taking action on flagged responses
When the system flags responses, we recommend:
- Review the response: Examine the content in the context of your research question
- Consider participant explanations: If provided, evaluate their reasoning
- Make an informed decision:
  - If clearly non-authentic: You can reject the submission
  - If uncertain: Submit a data quality (DQ) ticket to Prolific Support for review
  - If a legitimate reason exists: Accept the submission
Quality assurance considerations
- Remember that while the model is highly accurate, no detection system is perfect
- Consistently communicate your authenticity requirements to participants
- When in doubt about flagged responses, our Support team can help evaluate edge cases
Benefits and value of authenticity checks
Implementing authenticity checks helps you:
- Collect higher-quality data by ensuring genuine human responses
- Reduce time spent on manual verification and filtering out non-authentic responses
- Build more representative datasets that truly reflect human perspectives
- Make better-informed decisions based on authentic human insights rather than AI-generated content
By securing authenticity at the data collection stage, you'll spend less time cleaning data and more time generating valuable insights.
Need further help?
If you have questions about implementing authenticity checks or interpreting results:
• Contact our Support team
• Check our Help Center for updates and additional guidance
• Submit a DQ ticket for help evaluating specific flagged responses