
How Prolific protects your data integrity

Here's an overview of the processes, systems, and safeguards we use to maintain data integrity, so you can consistently collect high-quality human responses.


We take a layered approach to data integrity. It spans five layers: who gets to join our participant pool, ongoing fraud detection, in-study quality tools, performance-based quality controls, and maximizing participant motivation and performance.

Layer 1: Who gets to become a participant

We use a waitlist for participants and invite them selectively based on demand. This keeps the pool high-quality with strong demographic representation. The waitlist also deters fraudulent actors because participants can't join instantly.

When participants are invited from the waitlist, they must pass more than 50 verification checks before accessing studies.

Our onboarding process includes several checks to protect your data integrity:

  • Identity verification

    • Ensures you collect genuine human responses from people who match your selected demographics.

  • IP address validation and deduplication

    • Confirms you collect data from unique participants in your target locations.

  • Response quality assessments

    • Participants complete an onboarding survey that screens for low-effort responses and AI misuse, stopping low-quality data before it reaches your studies.


Layer 2: Ongoing fraud detection

Prolific runs continuous checks throughout each participant's journey to maintain data quality over time.

  • Account ownership verification

    • We confirm that participant accounts remain with their original owners, preventing account sharing or selling.

  • Regular required profile updates

    • Participants regularly update their "About You" answers to keep screener data accurate and current.

  • Bi-monthly quality audit of our pool

    • We spot and address emerging quality risks early, so your studies continue receiving reliable, high-integrity responses.

  • Ongoing behavior monitoring of our entire pool

    • Our systems monitor behavior across the entire participant pool to keep low-quality or inauthentic responses out of your studies.


Layer 3: In-study quality tools

  • AI-generated response detection

    • Our proprietary machine-learning model flags likely AI-generated responses with high precision. You can reject and replace them before they affect your results, giving you confidence that your data is authentically human.

  • Guidance for attention and comprehension checks

    • We provide guidance on adding attention and comprehension checks to your study. This helps you identify low-effort responses and reject participants who fail too many checks, keeping your data accurate and reliable.

  • Flagging "exceptionally fast submissions"

    • We flag unusually fast completions that don't match expected study time. Low-integrity or bot responses can be auto-rejected and replaced quickly, ensuring your dataset reflects genuine, thoughtful participation (see the sketch below).
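
If you also want to run a similar screen on your own exported results before approving submissions, here is a minimal sketch in Python. It assumes a CSV with hypothetical column names (participant_id, time_taken_seconds, attention_check_1, attention_check_2) and an illustrative "under a third of the median completion time" rule; these names and thresholds are examples, not Prolific's actual export format or internal flagging logic.

    import csv

    # Hypothetical thresholds, for illustration only.
    MAX_FAILED_CHECKS = 1    # review if more than this many attention checks failed
    FAST_FRACTION = 1 / 3    # review completions under a third of the median time

    with open("submissions.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    # Median completion time across all submissions.
    times = sorted(float(r["time_taken_seconds"]) for r in rows)
    median_time = times[len(times) // 2]

    for row in rows:
        failed = sum(
            row[col] != "pass"
            for col in ("attention_check_1", "attention_check_2")
        )
        too_fast = float(row["time_taken_seconds"]) < FAST_FRACTION * median_time
        if failed > MAX_FAILED_CHECKS or too_fast:
            print(f"Review before approving: {row['participant_id']}")

In practice you would review flagged submissions manually rather than rejecting them automatically, since attention-check wording and reasonable timing vary from study to study.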


Layer 4: Performance-based quality controls

  • Participant quality records

    • We track each participant's quality based on researcher approvals and rejections, building a performance history over time.

    • This keeps the participant pool reliable and supports long-term data integrity.

  • Performance-based access controls

    • Participants with too many rejections lose access to studies, while consistently high-performing participants are prioritized.

    • This reduces low-quality participation and increases the likelihood that your studies receive high-integrity responses.

  • Quality-based screeners

    • You can use our free screeners to filter participants by experience and approval rate—for example, those who've completed 50+ studies with a 100% approval rating.

    • This helps you recruit participants with a strong track record and collect more reliable data (the arithmetic behind this kind of screener is sketched after this list).

  • Specialist participants (premium)

    • You can filter for participants with verified skills or qualifications relevant to your task, such as fact-checking, red-teaming, writing, or specific professional backgrounds.

    • This matches expertise to your study and improves data quality for specialized research.
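
As a rough illustration of the arithmetic behind a quality-based screener, the sketch below checks whether a participant's history clears an experience and approval-rate threshold. The function and numbers are hypothetical, not part of any Prolific API.

    # Hypothetical helper: does a participant's history clear the screener?
    def meets_screener(approved: int, rejected: int,
                       min_studies: int = 50, min_rate: float = 1.0) -> bool:
        total = approved + rejected
        if total < min_studies:
            return False          # not enough completed studies yet
        return approved / total >= min_rate

    print(meets_screener(approved=50, rejected=0))   # True: 50 studies, 100% approval
    print(meets_screener(approved=120, rejected=3))  # False: ~97.6% approval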


Layer 5: Maximizing participant motivation and performance

  • Fair rewards to support reliable participation

    • We enforce a minimum reward rate of £6 / $8 per hour and recommend higher ethical pay.

    • Fair pay improves participant motivation and retention, which supports more consistent, high-quality responses for your studies (see the rate check sketched after this list).

  • Ongoing participant wellbeing checks

    • We run regular psychometric wellbeing checks and publish a Wellbeing Report to monitor and improve participant experience.

    • A healthier, better-supported participant pool is more engaged and better able to complete complex or ongoing research reliably.

  • Human oversight and appeals

    • Our support team reviews appeals, overturns unfair decisions, and fine-tunes quality rules. We also run proactive spot checks of banned accounts to improve accuracy.

    • This keeps quality controls fair and effective, helping retain high-integrity participants while excluding low-quality actors.
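
Checking a study reward against the minimum rate is simple arithmetic, sketched below with hypothetical values (GBP only); the £6/hour figure is the minimum stated above.

    MIN_RATE_GBP_PER_HOUR = 6.00   # the £6/hour minimum mentioned above

    def hourly_rate(reward_gbp: float, estimated_minutes: float) -> float:
        # Effective hourly rate = reward divided by estimated time in hours.
        return reward_gbp / (estimated_minutes / 60)

    rate = hourly_rate(reward_gbp=1.50, estimated_minutes=12)  # example study
    print(f"£{rate:.2f}/hour ->",
          "OK" if rate >= MIN_RATE_GBP_PER_HOUR else "below minimum")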

