Read more about the pros and cons of collecting your data online.

Advantages

  1. Speed.
    The main advantage of a Prolific sample is speed. If your eligibility criteria are broad, participants should roll in quickly: the median time for a study of N ≈ 100 to fill up is 2-2.5 hours.
  2. Wide reach.
    By combining pre-screening criteria, Prolific allows you to target niche segments of the population. If your study has unusual requirements, the chances that Prolific can put you in touch with the right group are much higher than if you recruited from your local population yourself.
  3. Comparable to lab studies.
    Although the conditions of laboratory and online testing differ considerably, there is growing evidence that their findings are comparable. See Crump et al.’s article replicating many classic cognitive experiments on MTurk, or Peer et al.’s similar paper for Prolific and Crowdflower.
  4. More demographically representative than a lab sample.
    A lot of university research is conducted on undergraduate students. These samples are concentrated in a narrow age range (typically 18-23) and are highly educated. Sears’s 1986 paper reflects on the inadequacies of findings based on such a narrow participant group. In contrast, Prolific’s participant pool is older and spans a range of education levels and employment experience.

Limitations

  1. Rapid-responder bias.
    Prolific predominantly uses convenience sampling, meaning most study places are filled on a first-come, first-served basis. We have several mechanisms to reduce this bias and distribute study places fairly among active participants, balanced against sampling efficiency so that your data collection doesn’t slow down too much. Nevertheless, unless your sample is very large or very specific, a considerable portion of responses will come from participants who happen to be online when your study launches, or in the hours immediately afterwards. When launching your study, consider the time of day and day of the week: a study launched on a Monday morning will draw a different respondent pool (shift workers, people not in paid work, part-time workers, students, etc.) from a study launched on a Monday evening. We have participants from around the world (although mostly the UK and the US), so bear this in mind when setting your pre-screeners and scheduling your launch.

  2. Bias towards women, younger people, and higher levels of education (WEIRD bias).
    Prolific’s participant pool is not representative of any national population: it’s international! As of summer 2018, if you give your study very broad pre-screeners, your sample is likely to have a slight female bias and to be younger and more highly educated than you might expect. This is often called WEIRD bias, referring to the observation that most participant pools in the social sciences are biased towards Western, Educated, Industrialized, Rich and Democratic individuals (because they are predominantly from the US and Europe). If representativeness matters for your research question, it is worth quantifying how far your realized sample departs from a reference population (a sketch follows this list).

  3. Selection bias (topic, reward, length).
    Prolific’s study portal serves as a marketplace, where participants choose which studies they wish to take part in (of those they are eligible for). Participants can browse the available studies, reading descriptions and comparing hourly reward rates and average completion times. This means that every participant in your sample has chosen to do your study. Consequently, the people who participate may differ systematically from the wider population: they may be particularly interested in the topic of your study, or attracted by the size of the reward. You might ask, “Isn’t this also true for data that is not collected online?” The answer is yes, but online samples carry some additional biases connected to selection bias that lab studies do not. You can reduce this bias by, for example, keeping your study title and description neutral, so that they reveal as little as possible about your topic or hypotheses.

  4. Maximum reward-per-hour bias (satisficing).
    A minority of users do see Prolific primarily as a way to make money and do not care about the quality of their responses. This behaviour is known as satisficing: selecting answer choices without thinking carefully about them (sometimes even at random), with the goal of finishing the study as quickly as possible. As a result, these responses deviate greatly from those of honest, diligent participants. We use many layers of checks to screen out such participants long before they make it into your study, but a handful may occasionally slip the net. There are methods for detecting these satisficers (one simple approach is sketched below), and failing to remove them from your sample could bias your results. For more detailed information on collecting high-quality data, free of malicious participants, see this blog post.
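
Here is the sketch promised under limitation 2: a minimal, hypothetical example of quantifying demographic skew in a collected sample using a chi-square goodness-of-fit test in Python. The counts and reference proportions are invented for illustration; they are not real Prolific or census figures.

    # Minimal sketch: compare observed sample demographics against
    # illustrative reference-population proportions with a chi-square
    # goodness-of-fit test. All numbers below are hypothetical.
    from scipy.stats import chisquare

    observed = [68, 32]            # e.g. 68 women, 32 men in your sample
    reference = [0.51, 0.49]       # illustrative population proportions
    expected = [p * sum(observed) for p in reference]

    stat, p_value = chisquare(observed, f_exp=expected)
    print(f"chi2 = {stat:.2f}, p = {p_value:.4f}")  # a small p suggests skew

A significant departure does not invalidate your data, but it can help you decide whether tighter pre-screening or post-hoc weighting is worth considering.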
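
And here is the satisficer screen mentioned under limitation 4: a minimal sketch that flags respondents who fail an attention check or finish implausibly fast. The column names, data, and speed cutoff are hypothetical, not Prolific defaults or output.

    # Minimal sketch of a common satisficer screen on exported survey data.
    # Column names, data, and the speed cutoff are illustrative only.
    import pandas as pd

    df = pd.DataFrame({
        "participant_id": ["p1", "p2", "p3", "p4"],
        "completion_seconds": [410, 95, 388, 452],
        "attention_check_passed": [True, True, False, True],
    })

    # Heuristic: anything under half the median completion time is
    # suspiciously fast; tune this per study rather than treating it
    # as a fixed rule.
    fast_cutoff = df["completion_seconds"].median() / 2
    df["too_fast"] = df["completion_seconds"] < fast_cutoff

    # Failing an attention check OR speeding is grounds for manual review.
    df["flag_for_review"] = df["too_fast"] | ~df["attention_check_passed"]
    print(df.loc[df["flag_for_review"], ["participant_id", "completion_seconds"]])

Flagged responses are best reviewed by hand rather than rejected automatically: fast completion alone is not proof of careless responding.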

Further reading:

More than fun and money. Worker Motivation in Crowdsourcing

Short-term rewards don’t sap long-term motivation

Figuring Out Preference or Balancing Out Effort: Do Inferences From Incentives Undermine Post-Incentive Motivation?

Not Just in it for the Money: A Qualitative Investigation of Workers' Perceived Benefits of Micro-task Crowdsourcing

A meta-analytic review of experiments examining the effects of extrinsic rewards on intrinsic motivation

An Assessment of Intrinsic and Extrinsic Motivation on Task Performance in Crowdsourcing Markets

