Prolific is a closed participant pool: entry is selective, and the pool is not open to the public.
Every participant undergoes identity verification before accessing any study and is subject to continuous monitoring throughout their time on the platform.
This document describes the systems Prolific uses to ensure data integrity, so that researchers can reference them in journal submissions, ethics applications, and reviewer responses.
Understanding the Threats to Online Data Quality
Discussions about data integrity in online research often use terms like "bots," "AI," and "fraud" interchangeably. In practice, these refer to distinct threats that differ in how they work, how common they are, and how they can be detected. We distinguish four categories:
Fraudulent accounts are created by individuals who fake their identity to access studies they are not eligible for, or to participate multiple times. This includes the use of stolen or fabricated documents, VPNs, or duplicate accounts. These are addressed primarily through identity verification and account-level monitoring.
Traditional bots are automated scripts designed to complete surveys with little or no human involvement. These have existed for years and tend to produce random, low-effort, or nonsensical responses. They are generally the easiest threat to detect because they cannot convincingly mimic genuine human behaviour.
AI-powered bots (also referred to as agentic AI or autonomous agents) represent a more sophisticated threat. These are software systems designed to complete surveys entirely without human involvement. An agent might parse questions, generate answers, and simulate human-like behaviour such as mouse movements and typing. While this category has received significant attention, current evidence suggests it remains extremely rare in practice on platforms with robust identity verification. It is, however, an area of active monitoring and development.
AI-assisted participation occurs when a real, verified human participant uses tools such as ChatGPT to help generate their answers, most commonly for open-ended text questions. The participant is genuine, but part of their response is not. This is currently the most commonly observed form of AI-related concern in online research.
Each of these threats requires different countermeasures, and Prolific's quality systems are designed to address all four. The sections that follow describe how.
Quality Assurance System: Protocol
Prolific applies a multi-layered quality assurance system, called Protocol, consisting of 50+ automated checks. These checks operate at every stage, from initial registration through to study completion, and are included by default; researchers do not need to enable them.
Figure: Quality Assurance Protocol
Note. This figure highlights key checks at each layer of Protocol, Prolific's multi-layered quality assurance system.
Layer 1: Entry Requirements
Access to Prolific is restricted. Only 13% of waitlist applicants are invited to proceed, and of those, only 55% pass onboarding, meaning that roughly 7% of applicants ultimately join the pool. Before a participant can access any study, they must complete:
Identity verification via a live video selfie and government-issued document check, powered by Entrust’s identity verification technology (0.1% fraud rate, 0.01% false acceptance rate). The system also detects manipulated documents and requires a physical human face matched to the real ID.
Phone and email verification
IP validation and deduplication to identify multiple accounts from the same individual (a minimal sketch of this kind of check follows this list)
Quality screening for signals of AI misuse, speeding, and low-effort behaviour during onboarding
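As an illustration of how a deduplication check of this kind can operate, the sketch below groups accounts that share a network identifier and flags any group containing more than one account for review. It is a minimal, hypothetical example; Prolific's production checks draw on more signals than a raw IP address.

```typescript
// Minimal sketch of IP-based deduplication (hypothetical; production systems
// combine IP with device, document, and behavioural signals).
interface Account {
  id: string;
  ipAddress: string;
}

function findDuplicateGroups(accounts: Account[]): Account[][] {
  const byIp = new Map<string, Account[]>();
  for (const account of accounts) {
    const group = byIp.get(account.ipAddress) ?? [];
    group.push(account);
    byIp.set(account.ipAddress, group);
  }
  // Any IP shared by more than one account is flagged for manual review.
  return [...byIp.values()].filter((group) => group.length > 1);
}
```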
For a detailed walkthrough of how each verification step addresses bots, AI agents, and fraud, see How Prolific detects bots and AI in online research.
Layer 2: Ongoing Monitoring
Participants who pass onboarding continue to be monitored on an ongoing basis:
Surprise identity re-checks require participants to complete new video selfies at unannounced intervals
Bi-monthly quality audits are conducted across the entire participant pool
Behavioural monitoring flags patterns consistent with automation
A dedicated data quality team conducts hundreds of manual reviews daily
Layer 3: In-Study Detection
In addition to platform-level protections, Prolific offers tools that operate during data collection itself.
Authenticity Checks are an optional integration researchers can add to their Qualtrics surveys. These checks have two components, each targeting a different threat:
Bot authenticity checks analyse behavioural signals to identify automated environments, for example, patterns in mouse movement and typing speed that are consistent with software rather than a human respondent. These checks run across the entire survey regardless of question type. (Both components are illustrated in the sketch after the next item.)
LLM authenticity checks identify signs that a human participant is using an AI tool to generate their responses. This includes monitoring for behaviours such as copy-pasting and tab-switching, and is applicable to questions that prompt open-ended text answers.
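As a simplified illustration of the kinds of signals both components can draw on, the sketch below combines a timing-regularity heuristic (automated input tends to be metronomically regular) with counters for paste and tab-switch events, using standard DOM APIs. This is a hypothetical sketch, not Prolific's implementation; the `#answer` selector is an assumption.

```typescript
// Combined sketch of both check types (illustrative only; Prolific's
// production signals and scoring are proprietary and more extensive).

// 1. Bot signal: scripted input tends to have near-uniform timing, so a
//    very low coefficient of variation across input events is suspicious.
function interEventVariability(timestampsMs: number[]): number {
  if (timestampsMs.length < 3) return NaN; // too few events to judge
  const intervals = timestampsMs.slice(1).map((t, i) => t - timestampsMs[i]);
  const mean = intervals.reduce((a, b) => a + b, 0) / intervals.length;
  const variance =
    intervals.reduce((a, b) => a + (b - mean) ** 2, 0) / intervals.length;
  return Math.sqrt(variance) / mean; // ~0 for metronomic, scripted input
}

// 2. LLM-assistance signals: count pastes into an open-ended answer field
//    and switches away from the survey tab, using standard DOM events.
const signals = { pasteEvents: 0, tabSwitches: 0 };

const answerBox = document.querySelector<HTMLTextAreaElement>("#answer");
answerBox?.addEventListener("paste", () => {
  signals.pasteEvents += 1;
});

document.addEventListener("visibilitychange", () => {
  if (document.visibilityState === "hidden") signals.tabSwitches += 1;
});
```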
Together, the bot and LLM authenticity checks achieved 98.7% precision, 78.9% recall, and 88.4% overall accuracy in testing. In practical terms, when a response is flagged as AI-generated, the flag is correct 98.7% of the time (false positive rate: 0.6%), minimising the risk that genuine participants are incorrectly flagged. The system detects approximately four in five AI-assisted responses, and overall classification accuracy across both genuine and non-genuine responses is 88.4%. These detection capabilities operate alongside Prolific's platform-level protections (Layers 1–2). Prolific continues to refine detection sensitivity as new forms of AI-assisted responding emerge.
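To make these metric definitions concrete, the sketch below computes precision, recall, accuracy, and false positive rate from a confusion matrix. The counts are invented for illustration and are not Prolific's evaluation data.

```typescript
// Metric definitions over a confusion matrix (counts are hypothetical).
interface Confusion {
  tp: number; // AI-assisted response, correctly flagged
  fp: number; // genuine response, wrongly flagged
  fn: number; // AI-assisted response, missed
  tn: number; // genuine response, correctly passed
}

function metrics({ tp, fp, fn, tn }: Confusion) {
  return {
    precision: tp / (tp + fp), // how often a flag is correct
    recall: tp / (tp + fn), // share of AI-assisted responses caught
    accuracy: (tp + tn) / (tp + fp + fn + tn),
    falsePositiveRate: fp / (fp + tn), // genuine responses wrongly flagged
  };
}

// Example with invented counts:
console.log(metrics({ tp: 790, fp: 10, fn: 210, tn: 990 }));
// { precision: 0.9875, recall: 0.79, accuracy: 0.89, falsePositiveRate: 0.01 }
```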
Additionally, Prolific automatically flags exceptionally fast submissions, where a participant completes a study far more quickly than the median completion time for that study.
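A minimal sketch of this kind of speed check appears below. The cutoff of one third of the median is an assumption chosen for illustration, not Prolific's actual threshold.

```typescript
// Flag submissions completed far faster than the median (cutoff assumed).
function flagFastSubmissions(
  completionSecs: number[],
  fractionOfMedian = 1 / 3,
): number[] {
  const sorted = [...completionSecs].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  const median =
    sorted.length % 2 === 0 ? (sorted[mid - 1] + sorted[mid]) / 2 : sorted[mid];
  const cutoff = median * fractionOfMedian;
  // Return the indices of submissions below the cutoff.
  return completionSecs.flatMap((t, i) => (t < cutoff ? [i] : []));
}

// Example: the median of [300, 320, 310, 90, 305] is 305s, so the 90s
// submission (index 3) falls below the ~102s cutoff and is flagged.
console.log(flagFastSubmissions([300, 320, 310, 90, 305])); // [3]
```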
Prolific advises all researchers to include their own attention or comprehension checks as an additional layer of quality assurance.
Layer 4: Performance Tracking
Prolific maintains a dynamic quality tracking system informed by researcher feedback:
Participants receive quality scores that update based on researcher approvals, rejections, and reports (a sketch of such an update rule follows this list).
Participants who accumulate excessive rejections are removed from the pool.
Banned participants cannot rejoin Prolific.
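The sketch below illustrates the general shape of such a score-tracking rule. The weights and removal floor are invented for illustration; Prolific's actual scoring rules are not public.

```typescript
// Hypothetical quality-score update rule (weights and floor are assumptions).
type Decision = "approved" | "rejected" | "reported";

const WEIGHTS: Record<Decision, number> = {
  approved: +1,
  rejected: -5,
  reported: -10,
};
const REMOVAL_FLOOR = -20;

function updateScore(score: number, decision: Decision): number {
  return score + WEIGHTS[decision];
}

function shouldRemove(score: number): boolean {
  return score <= REMOVAL_FLOOR;
}

// Example: two rejections and a report push a new participant to the floor.
let score = 0;
for (const d of ["rejected", "rejected", "reported"] as Decision[]) {
  score = updateScore(score, d);
}
console.log(score, shouldRemove(score)); // -20 true
```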
Layer 5: Participant Engagement and Retention
High-quality data depends on motivated, engaged participants. Prolific’s approach to participant experience is designed to retain reliable participants and create conditions that encourage attentive, effortful responding.
Prolific enforces a minimum pay rate of $8/£6 per hour across all studies and recommends a rate of £12 per hour. Research on survey response quality has consistently linked adequate compensation to higher engagement and lower rates of satisficing behaviour (e.g., Douglas et al., 2023; Ritchey et al., 2023). When participants feel fairly paid, they are more likely to give considered, attentive responses rather than rushing through to maximise their hourly return.
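For concreteness, the sketch below converts an hourly rate into a per-study reward; the 20-minute study duration is hypothetical.

```typescript
// Convert an hourly rate into a per-study reward.
function studyReward(estimatedMinutes: number, hourlyRate: number): number {
  return (estimatedMinutes / 60) * hourlyRate;
}

// A hypothetical 20-minute study at the £6/hour minimum and the £12/hour
// recommended rate:
console.log(studyReward(20, 6).toFixed(2)); // "2.00" (minimum)
console.log(studyReward(20, 12).toFixed(2)); // "4.00" (recommended)
```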
A dedicated support team handles participant appeals, reviews edge cases, and conducts proactive audits of flagged accounts. This ensures that quality enforcement remains accurate and that participants are treated fairly, both of which are important for maintaining engagement and trust within the pool.
These measures contribute to a participant pool with strong engagement and high willingness to return. Retention rates support the feasibility of longitudinal and multi-wave research designs.
Bot, AI, and Fraud Prevention
Concerns about automated and AI-assisted responding in online research have increased significantly in recent years. Prolific's approach to these threats operates on two levels: preventing bad actors from entering and remaining in the pool (Layers 1–2 above), and detecting non-genuine behaviour within studies (Layers 3–4 above).
It is worth noting that these protections are cumulative. An automated agent attempting to participate in a Prolific study would need to pass identity verification including a live video selfie, maintain consistent behavioural patterns over time, avoid detection by ongoing quality and identity audits, and, if the researcher has enabled Authenticity Checks, produce responses that are behaviourally consistent with genuine human participation across multiple signals.
AI-assisted responding presents a different challenge, because the participant themselves is genuine. Here, Prolific's defences operate at different layers. During onboarding, participants who show signs of AI misuse are screened out (Layer 1). Participants who have been reported for AI use in previous studies are flagged and may be removed from the pool (Layers 2 and 4). Within a study, if the researcher has enabled Authenticity Checks, the LLM authenticity checks monitor for behavioural patterns associated with AI-assisted answering, such as copy-pasting and tab-switching (Layer 3). Internal audits suggest that AI-assisted responding among Prolific participants currently sits below 1%.
As new forms of automated or AI-assisted responding are identified, countermeasures are developed and deployed. Some measures are not disclosed publicly to avoid enabling workarounds.
Citable Metrics
The following figures are drawn from Prolific's internal auditing and third-party verification partners, and can be cited in publications using the suggested citation included with each metric:
Metric | Figure | Source and Suggested Citation |
Identity verification fraud rate (false acceptance of fraudulent documents or identities) | <0.1% | Entrust identity verification technology. Suggested citation: Prolific. (2025, November 28). How Prolific detects bots and AI in online research. https://www.prolific.com/resources/how-prolific-detects-bots-and-ai-in-online-research |
Participants flagged for AI-generated responses | <0.1% | January 2026 internal audit. Suggested citation: Prolific. (2026). Internal data quality audit, January 2026. Unpublished internal report. Prolific. |
Overall study rejection rate (all studies, 2025) | 0.5% | Prolific internal data. Suggested citation: Prolific. (2025). Prolific: Setting standards for authentic human data collection. https://www.prolific.com/resources/prolific-setting-standards-for-authentic-human-data-collection |
Note: The identity verification fraud rate reflects the performance of Entrust’s video liveness and document verification technology, which Prolific uses to verify all participants at onboarding. It represents the rate at which fraudulent identities pass the verification step, not an overall platform-level fraud rate. For further detail, see How Prolific detects bots and AI in online research.
A note on interpreting the rejection rate: the low overall rejection rate reflects the effect of upstream filtering. Because identity verification and behavioural monitoring remove most problematic participants before they reach a study, in-study rejection rates remain low. A low rejection rate should not be interpreted as an absence of quality controls, but as evidence that those controls are operating at earlier stages.
Comparative Evidence
Several peer-reviewed studies have independently compared data quality across online research platforms. Prolific has consistently performed at or near the top of these comparisons.
Study | Platforms Compared | Key Findings Relevant to Prolific |
Kay, C.S., & Vlasceanu, M. (2026). A scale for detecting LLM-generated responses in online survey research [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/4p7ns | CloudResearch Connect, Prolific, Qualtrics, Forthright, MSI-ACI | Almost no Prolific participants were flagged as synthetic, a rate comparable to Connect and substantially lower than Qualtrics (~9%), Forthright (~5%), and MSI-ACI (~4%). The study also proposes a transparent, non-proprietary 4-item detection framework (ECLAIR) that achieved 97.78% correct classification of synthetic respondents, which could complement platform-level detection tools. |
Esch, D. T., Mylonopoulos, N., & Theoharakis, V. (2025). Evaluating mobile-based data collection for crowdsourcing behavioral research. Behavior Research Methods, 57(106). https://doi.org/10.3758/s13428-025-02618-1 | MTurk, Prolific, Qualtrics, Pollfish | Prolific and MTurk outperformed Pollfish and Qualtrics on standard data quality measures. Mobile-first platforms like Pollfish reach different demographics but with lower attention/comprehension scores. Highlights a potential representativeness gap for desktop-first platforms including Prolific. |
Douglas, B. D., Ewell, P. J., & Brauer, M. (2023). Data quality in online human-subjects research: Comparisons between MTurk, Prolific, CloudResearch, Qualtrics, and SONA. PLOS ONE, 18(3). https://doi.org/10.1371/journal.pone.0279720 | MTurk, Prolific, CloudResearch, Qualtrics, SONA | Prolific delivered the best cost per high-quality respondent at $1.90 (vs. CloudResearch $2.00, MTurk $4.36, Qualtrics $8.17). Prolific recall accuracy was 83.47% vs. MTurk's 52.20%. |
Albert, D.A., & Smilek, D. (2023). Comparing attentional disengagement between Prolific and MTurk samples. Scientific Reports, 13(20574). https://doi.org/10.1038/s41598-023-28231-4 | MTurk, Prolific | MTurk participants exhibited higher attentional disengagement than Prolific participants. The authors suggest Prolific's policy on attention-check questions may contribute to lower disengagement. Note: small sample (N=290 across two experiments). |
Peer, E., Rothschild, D., Gordon, A., Evernden, Z., & Damer, E. (2022). Data quality of platforms and panels for online behavioral research. Behavior Research Methods, 54, 1643–1662. https://doi.org/10.3758/s13428-021-01694-3 | MTurk, Prolific, CloudResearch, Qualtrics, Dynata | Only Prolific delivered high data quality across all four measures (attention, comprehension, honesty, reliability) without quality filters. With filters enabled, CloudResearch matched Prolific, but MTurk remained poor. |
Tang, J., Birrell, E., & Lerner, A. (2022). How well do my results generalize now? The external validity of online privacy and security surveys [Preprint]. arXiv. https://doi.org/10.48550/arXiv.2202.14036 | MTurk, Prolific (two samples per platform) | Both Prolific samples generalised better than MTurk. The Prolific representative sample was the most representative overall, though it took significantly longer to recruit (49 hours vs. 2–2.5 hours) and cost more. Prolific was generally representative for user perceptions and experiences but less so for security/privacy knowledge. |
Transparency note: In Peer et al. (2022), four of the five authors were affiliated with Prolific. Kay and Vlasceanu (2026) and Tang et al. (2022) are preprints that have not yet undergone peer review. All other studies listed are peer-reviewed and independently authored.
Further Resources
The following resources provide additional context and may be useful when responding to reviewer questions or preparing submissions:
Writing About Prolific in Your Research: Template language for methods sections, IRB applications, and consent forms.
Responding to Reviewer and Editor Concerns: Answers to common reviewer objections about online samples and data quality.
This document is maintained by Prolific and is updated as new safeguards, audit data, or independent research become available. Last updated: 26 March 2026.
For questions about specific safeguards or to request additional documentation for a journal submission, contact Support using the icon at the bottom right of this page.
