Online crowdsourcing means recruiting participants for studies through online platforms, such as Prolific, rather than running studies in person, for example in a lab. In a post-pandemic world, online crowdsourcing has become an increasingly popular way of running research.

The question is, what are the advantages, and potential disadvantages, of taking this online approach?

Here are some helpful papers making the case for online crowdsourcing:

  1. This paper shows that online crowdsourcing is a legitimate way of collecting data, allowing researchers to gather high-quality data inexpensively and rapidly.
  2. Here's a paper showing that the results of a lab experiment were indistinguishable from those of its online replication.
  3. Here's a more nuanced perspective on lab vs. online experiments: some effects replicate online, others don't.
  4. This one is a great example of a high-impact paper published in the journal Nature using MTurk samples.
  5. Finally, this new paper shows that Prolific is better suited for scientific research than MTurk.

You can read more on the advantages and limitations of online samples below:


Advantages

  1. Speed. The main advantage of a Prolific sample is speed. If your eligibility criteria are broad, participants should roll in quickly. The median time it takes for a study of around 100 participants to fill up is just under 3 hours.
  2. Data quality. Another key advantage is Prolific’s data quality. On measures such as comprehension, attention, and honesty, Prolific’s participants outperformed those on other crowdsourcing platforms. Check out Peer et al.’s paper for more details. We achieve this through regular checks of our participant pool, and you can read more about this in our blog post.
  3. Wide reach. By combining pre-screening criteria, Prolific allows you to target niche segments of the population. If your study has unusual requirements, the chances that Prolific can put you in touch with that group are much higher than if you recruited them yourself from the local population.
  4. Comparable to lab studies. Despite the conditions of laboratory and online testing being very different, there is growing evidence that their findings are comparable. See Crump et al.'s article replicating many classical cognitive experiments on MTurk, or Peer et al.’s similar paper for Prolific and Crowdflower.
  5. More demographically representative than a lab sample. A lot of university research is conducted on undergraduate students. These samples are concentrated in a narrow age range (typically 18-23) and are highly educated. Sears' 1986 paper reflects on the inadequacies of findings based on such a narrow participant group. In contrast, Prolific’s participant pool is older and spans a range of education levels and employment experience.

Limitations

  1. Rapid-responder bias. Prolific predominantly uses convenience sampling, meaning most of our study places are filled on a first-come, first-served basis. However, we have several mechanisms to reduce this bias and fairly distribute study places among active participants. These mechanisms are balanced against sampling efficiency so your data collection doesn’t slow down too much. Nevertheless, unless your sample is very large or very specific, a considerable portion of responses will come from participants who happen to be online at the time your study is launched, or in the hours immediately afterwards. When launching your study, consider the time of day and day of the week. A study launched on a Monday morning will have a different respondent pool (shift workers, people not in paid work, unemployed, part-time workers, students, etc.) from a study launched on a Monday evening. We have participants from around the world (although mostly the UK and the US), so consider this when setting your pre-screeners and launching your study.
  2. Bias towards women, younger participants, and higher levels of education (WEIRD bias). Prolific’s participant pool is not representative of any national population – it’s international! If you do not apply prescreeners or balance your sample, it is likely to skew female, younger, and more highly educated than you might expect. This type of bias is often called a WEIRD bias: the observation that most participant pools in the social sciences are skewed towards Western, Educated, Industrialized, Rich and Democratic individuals (because they are predominantly drawn from the US and Europe).
  3. Selection bias (topic, reward, length). Prolific’s study portal serves as a study marketplace, where participants can choose which studies they wish to take part in (of those they are eligible for). Participants can browse the available studies, reading descriptions and comparing hourly reward rates and average completion times. This means that every participant in your sample has chosen to do your study. Consequently, it is possible that the people who participate in your study differ systematically from the wider population: they may be particularly interested in the topic of your study, or may have been attracted by the size of the reward. Here's what you can do to reduce selection bias.
  4. Maximum reward-per-hour bias (satisficing). A minority of users do see Prolific primarily as a way to make money and do not care about the quality of their responses. This behaviour is known as satisficing: participants select answer choices without thinking about them carefully (sometimes even at random), aiming to finish the study as quickly as possible. As a result, their responses can deviate greatly from those of honest, diligent participants. We use many layers of checks to screen out such participants long before they make it into your study, but a handful may occasionally slip through the net. There are established methods for detecting these satisficers (a minimal sketch of some common checks follows this list), and not removing them from your sample could bias your results. For more detailed information on collecting high-quality data, free of malicious participants, see this blog post.
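To make the satisficing point above concrete, here is a minimal sketch, in Python with pandas, of how you might flag likely satisficers after exporting your response data. The column names (duration_s, attention_ok, q1–q10), the 120-second minimum duration, and the specific checks are illustrative assumptions, not Prolific features; adapt them to your own survey.

```python
import pandas as pd

# Hypothetical Likert-scale item columns in the exported responses.
LIKERT_COLS = [f"q{i}" for i in range(1, 11)]

def flag_satisficers(df: pd.DataFrame, min_duration_s: float = 120.0) -> pd.DataFrame:
    """Return a copy of df with boolean columns flagging likely satisficers."""
    out = df.copy()
    # 1. Implausibly fast completion, relative to a pre-registered minimum duration.
    out["too_fast"] = out["duration_s"] < min_duration_s
    # 2. Straight-lining: the same answer given to every Likert item.
    out["straight_lined"] = out[LIKERT_COLS].nunique(axis=1) == 1
    # 3. Failed an explicit attention check embedded in the survey.
    out["failed_attention"] = ~out["attention_ok"]
    out["likely_satisficer"] = (
        out["too_fast"] | out["straight_lined"] | out["failed_attention"]
    )
    return out

if __name__ == "__main__":
    # Toy data for three hypothetical participants.
    responses = pd.DataFrame({
        "duration_s": [95, 480, 350],
        "attention_ok": [True, True, False],
        **{col: ([3, 5, 2] if col == "q1" else [3, 4, 2]) for col in LIKERT_COLS},
    })
    flagged = flag_satisficers(responses)
    print(flagged[["too_fast", "straight_lined", "failed_attention", "likely_satisficer"]])
```

Flags like these are best used to review responses rather than to exclude them automatically: a participant who trips a single check may still have answered in good faith, so pre-register your exclusion criteria where possible.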

Further reading

More than fun and money. Worker motivation in crowdsourcing

Short-term rewards don’t sap long-term motivation

Figuring out preference or balancing out effort: Do inferences from incentives undermine post-incentive motivation?

Not Just in it for the money: A qualitative investigation of workers' perceived benefits of micro-task crowdsourcing

A meta-analytic review of experiments examining the effects of extrinsic rewards on intrinsic motivation

