
Communications of the ACM

Viewpoint

Responsible Research with Crowds: Pay Crowdworkers at Least Minimum Wage



Crowdsourcing is increasingly important in scientific research. According to Google Scholar, the number of papers including the term "crowdsourcing" has grown from fewer than 1,000 per year before 2008 to more than 20,000 in 2016 (see the accompanying figure).

Figure. Papers in Google Scholar that use the term "crowdsourcing."

Crowdsourcing, including crowdsourced research, is not always conducted responsibly. Typically this results not from malice but from misunderstanding or from a desire to use funding efficiently. Crowdsourcing platforms are complex; clients may not fully understand how they work. Workers' relationships to crowdwork are diverse, as are their expectations about appropriate client behavior, and clients may be unaware of these expectations. Some platforms prime clients to expect cheap, "frictionless" completion of work without oversight, as if the platform were not an interface to human workers but a vast computer without living expenses. But researchers have learned that workers are happier and produce better work when clients pay well, respond to worker inquiries, and communicate with workers to improve task designs and quality control processes.6 Workers have varied but often undervalued or unrecognized expertise and skills. Workers on Amazon's Mechanical Turk platform ("MTurk"), for example, are more educated than the average U.S. worker.2 Many advise clients on task design through worker forums. Workers' skills offer researchers an opportunity to shift perspective, treating workers not as interchangeable subjects but as sources of insight that can lead to better research. When clients do not understand that crowdsourcing work, including research, involves interacting through a complex, error-prone system with human workers who have diverse needs, expectations, and skills, they may unintentionally underpay or mistreat workers.

On MTurk, for example, clients may refuse to pay for ("reject") completed work for any reason. Rejection exists to prevent workers from cheating—for example, completing a survey with random answers. But rejection also has a secondary use: the percentage of tasks a worker has had "approved"—that is, the percentage of tasks their clients chose to pay for—is interpreted as a proxy for worker quality and is used to screen workers automatically for tasks. A worker's "approval rate," however, can be negatively affected by client errors in quality control, compromising workers' eligibility for other tasks. MTurk offers workers no way to contest rejections and no information about a client's rejection history. Clients can screen workers based on a form of "reputation," but not the reverse.
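To make this asymmetry concrete, the sketch below is our own illustration (not code from any cited study), assuming a client uses the MTurk API through the boto3 Python library to restrict a task to workers above an approval-rate threshold; the title, reward, file name, and threshold are hypothetical. Workers have no comparable mechanism for filtering clients by their rejection history.

    # A minimal sketch, assuming the boto3 MTurk client and configured AWS
    # credentials: restrict a HIT to workers with at least a 95% approval rate.
    # All values are illustrative.
    import boto3

    mturk = boto3.client("mturk", region_name="us-east-1")

    # MTurk's built-in "percent assignments approved" system qualification.
    APPROVAL_RATE_QUALIFICATION = "000000000000000000L0"

    hit = mturk.create_hit(
        Title="Short research survey (hypothetical)",
        Description="Answer a 10-minute survey about your work practices.",
        Keywords="survey, research",
        Reward="2.00",                           # USD per assignment
        MaxAssignments=100,
        AssignmentDurationInSeconds=30 * 60,
        LifetimeInSeconds=3 * 24 * 60 * 60,
        Question=open("survey_question.xml").read(),  # ExternalQuestion/HTMLQuestion XML
        QualificationRequirements=[{
            "QualificationTypeId": APPROVAL_RATE_QUALIFICATION,
            "Comparator": "GreaterThanOrEqualTo",
            "IntegerValues": [95],
            "ActionsGuarded": "DiscoverPreviewAndAccept",  # hide the task from other workers
        }],
    )
    print(hit["HIT"]["HITId"])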

These dynamics seem especially relevant for workers who rely on crowdwork as a primary or significant secondary source of income. While some readers may be surprised to hear that people earn a living through crowdwork, research shows this is increasingly common, even in rich countries. In a 2015 International Labour Organization survey of MTurk workers (573 U.S. respondents), 38% of U.S. respondents said crowdwork was their primary source of income, with 40% of these (15% of U.S. respondents) reporting crowdwork as their only source of income.2 In a 2016 Pew survey of 3,370 MTurk workers, 25% of U.S. respondents said that MTurk specifically was the source of "all or most" of their income.5

While, to our knowledge, it is generally not possible to know how representative any survey of crowdworkers is, these findings are consistent both with other MTurk-specific research and with recent national surveys of online labor platform activity more broadly, which spans "microtasking" platforms (such as MTurk), platforms for in-person work (such as Uber), and platforms for remote work (such as Upwork). For example, Farrell and Greig3 found that overall the "platform economy was a secondary source of income," but that "as of September 2015, labor platform income represented more than 75% of total income for 25% of active [labor platform] participants," or approximately 250,000 workers.a

With crowdwork playing an economically important role in the lives of hundreds of thousands—or millions—of people worldwide, we ask: What are the responsibilities of clients and platform operators?

Crowdsourcing is currently largely "outside the purview of labor laws,"8 but only because most platforms classify workers as "independent contractors," not employees. "Employees" in the U.S. are entitled to the protections of the Fair Labor Standards Act, including minimum wage and overtime pay, but contractors are not. (Many countries draw similar distinctions.) The legal classification is unclear and contested, and there is growing recognition (for example, Salehi et al.,9 Michelucci and Dickinson,8 and Berg2) that at least some crowdworkers should receive many or all of the protections afforded employees, but that recognition has not yet been realized in practice.

Our own research, which we have asked researchers to stop citing10 and will therefore not cite here, has been used to justify underpayment of workers. In a study of MTurk demographics conducted in 2008–2009, we reported that workers responding to our survey earned on average less than $2/hour. This figure has since been cited by researchers to justify paying similar wages.

Our (now outdated) descriptive research, which reported averages from a sizable but not necessarily representative sample of MTurk workers, was not an endorsement of that wage. Additionally, eight years have passed since that study—it should not be used to orient current practice.

Therefore, we build on a long-running conversation in computing research on ethical treatment of crowdworkers (for example, Bederson and Quinn1) by offering the following high-level guidelines for working with paid crowdworkers in research.

Pay workers at least minimum wage at your location. Money is the primary motivation for most crowdworkers (see, for example, Litman et al.6 for MTurk). Most crowdworkers thus relate to paid crowdwork primarily as work, rather than as entertainment or a hobby; indeed, as noted previously, a significant minority rely on crowdwork as a primary income source. Most developed economies have set minimum wages for paid work; however, the common requirement (noted earlier) that workers agree to be classified as independent contractors allows workers to be denied the protections afforded employees, including minimum wage.
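As a concrete, hypothetical illustration of what this guideline implies for pricing a task: at a local minimum wage of $12.00 per hour, a task that pilot runs suggest takes about six minutes should pay at least $1.20 per assignment. A minimal sketch of the arithmetic, with illustrative values:

    # A minimal sketch with illustrative values: price a task so the expected
    # hourly rate meets the client's local minimum wage.
    local_minimum_wage = 12.00      # USD per hour at the client's location
    estimated_minutes = 6.0         # median completion time from pilot runs

    reward = local_minimum_wage * estimated_minutes / 60.0
    print(f"Pay at least ${reward:.2f} per assignment")   # -> $1.20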

Ethical conduct with respect to research subjects often requires researchers to protect subjects beyond the bare minimum required by law; given the importance of money as a motivation for most crowdworkers, it is ethically appropriate to pay crowdworkers at least minimum wage. Further, workers themselves have requested this (Salehi et al.9), and ethics demands that we take worker requests seriously.


While crowdworkers are often located around the world, minimum wage at the client's location is a defensible lower limit on payment. If workers end up underpaid, for example because you underestimated how long a task would take, correct the problem (on MTurk, for instance, with bonuses). If workers are mistakenly refused payment on MTurk, reverse the rejections to prevent damage to their approval rate. Note that fair wages also lead to higher-quality crowdsourced research.6
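On MTurk, both corrections can be made programmatically. The sketch below is our own illustration, assuming the boto3 MTurk client and hypothetical worker and assignment IDs: a bonus tops up pay to the intended rate, and the OverrideRejection flag converts a mistaken rejection into an approval so the worker's approval rate is repaired.

    # A minimal sketch, assuming the boto3 MTurk client; IDs and amounts are
    # hypothetical placeholders.
    import boto3

    mturk = boto3.client("mturk", region_name="us-east-1")

    # 1. Top up pay when a task took longer than estimated.
    mturk.send_bonus(
        WorkerId="EXAMPLE_WORKER_ID",
        AssignmentId="EXAMPLE_ASSIGNMENT_ID",
        BonusAmount="0.80",   # USD shortfall relative to the intended hourly rate
        Reason="The task took longer than we estimated; this brings pay up to the intended rate.",
    )

    # 2. Reverse a rejection issued in error.
    mturk.approve_assignment(
        AssignmentId="EXAMPLE_ASSIGNMENT_ID",
        RequesterFeedback="Our quality-control check rejected this work in error. Apologies.",
        OverrideRejection=True,   # converts the rejection into an approval
    )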

Remember you are interacting with human beings, some of whom complete these tasks for a living. Treat them at least as well as you would treat an in-person co-worker. As workers themselves have gone to great lengths to express to the public,4 crowdworkers are not interchangeable parts of a vast computing system, but rather human beings who must pay rent, buy food, and put children through school—and who have, just like clients, career and life goals and the desire to be acknowledged, valued, and treated with respect.

Respond quickly, clearly, concisely, and respectfully to worker questions and feedback via both email and worker forums (for example, turkernation.com, mturkcrowd.com). In addition to being a reasonable way to engage with human workers, this engagement may also improve the quality of the work you receive, since you may be informed of task design problems before a great deal of work has been done—and before you have incurred a responsibility to pay for that work, which was done in good faith.

Learn from workers. If workers tell you about technical problems or unclear instructions, address them promptly, developing workarounds as needed for workers who have completed the problematic task. Especially if you are new to crowdsourcing, you may unknowingly be committing errors or behaving inappropriately due to your study design or mode of engagement. Many workers have been active for years, and provide excellent advice. Workers communicate with one another and with clients in forums (as described earlier); MTurk workers in particular have articulated best practices for ethical research in the Dynamo Guidelines for Academic Requesters (guidelines.wearedynamo.org; Salehi et al.9).

Currently, the design of major crowdsourcing platforms makes it difficult to follow these guidelines. Consider a researcher who posts a task to MTurk and, after the task is posted, discovers that even expert workers take twice as long as expected. This is unsurprising; recent research shows that task instructions are often unclear to workers. If this researcher wishes to pay workers "after-the-fact" bonuses to ensure they receive the intended wage, this can be done only one-by-one or with command-line tools. The former is time-consuming and tedious; the latter is usable by only a minority of clients. A platform's affordances, or lack thereof, strongly shape how clients are able to treat workers. We suggest platform operators would do workers, clients, and themselves a service by making it easier for clients to treat workers well in these cases.
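For the minority of clients comfortable with scripting, such bulk corrections are already possible through the API. The sketch below is our own (assuming the boto3 MTurk client; the HIT ID, wage, and original reward are hypothetical): it pays every worker on a HIT an after-the-fact bonus sized to bring their effective hourly rate up to the intended wage. Our broader point stands: the platform itself could make this far easier.

    # A minimal sketch, assuming the boto3 MTurk client; HIT ID, target wage,
    # and original reward are hypothetical. Elapsed time between accepting and
    # submitting an assignment is used as a rough proxy for time worked.
    import boto3

    mturk = boto3.client("mturk", region_name="us-east-1")

    HIT_ID = "EXAMPLE_HIT_ID"
    TARGET_HOURLY_WAGE = 12.00   # USD per hour the researcher intended to pay
    ORIGINAL_REWARD = 1.20       # USD already paid per assignment

    next_token = None
    while True:
        kwargs = {"HITId": HIT_ID, "AssignmentStatuses": ["Approved"], "MaxResults": 100}
        if next_token:
            kwargs["NextToken"] = next_token
        page = mturk.list_assignments_for_hit(**kwargs)

        for assignment in page["Assignments"]:
            hours = (assignment["SubmitTime"] - assignment["AcceptTime"]).total_seconds() / 3600.0
            shortfall = round(TARGET_HOURLY_WAGE * hours - ORIGINAL_REWARD, 2)
            if shortfall > 0:
                mturk.send_bonus(
                    WorkerId=assignment["WorkerId"],
                    AssignmentId=assignment["AssignmentId"],
                    BonusAmount=f"{shortfall:.2f}",
                    Reason="Bonus so that total pay for this task meets the intended hourly wage.",
                    UniqueRequestToken=assignment["AssignmentId"],  # guards against paying twice
                )

        next_token = page.get("NextToken")
        if not next_token:
            break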

Finally, we call on university Institutional Review Boards (IRBs) to turn their attention to the question of responsible crowdsourced research. Crowdworkers relate to their participation in crowdsourced research primarily as workers, so the relationship between researchers and crowdworkers is markedly different from researchers' relationship to study participants from other "pools." While there may be some exceptions, we believe researchers should generally pay crowdworkers at least minimum wage, and we urge IRBs to consider this position.

These suggestions are a start, not a comprehensive checklist. To make crowdsourced research responsible, researchers and IRBs must develop ongoing, respectful dialogue with crowdworkers.


Further Reading

For detailed treatment of ethical issues in crowdwork, see Martin et al.7 For alternatives to MTurk, see Vakharia and Lease11 or type "mturk alternatives" into any search engine. Readers interested in ethical design of labor platforms should seek recent discussions on "platform cooperativism" (for example, platformcoop.net).


References

1. Bederson, B. and Quinn, A.J. Web workers unite! Addressing challenges of online laborers. In Proceedings of CHI '11 EA (2011), 97–106.

2. Berg, J. Income security in the on-demand economy: Findings and policy lessons from a survey of crowdworkers. Comparative Labor Law & Policy Journal 37, 3 (2016).

3. Farrell, D. and Greig, F. Paychecks, paydays, and the online platform economy: Big data on income volatility. JP Morgan Chase Institute, 2016.

4. Harris, M. Amazon's Mechanical Turk workers protest: 'I am a human being, not an algorithm.' The Guardian (Dec. 3, 2014); http://bit.ly/2EcZvMS.

5. Hitlin, P. Research in the crowdsourcing age, a case study. Pew Research Center, July 2016.

6. Litman, L., Robinson, J., and Rosenzweig, C. The relationship between motivation, monetary compensation, and data quality among U.S.- and India-based workers on Mechanical Turk. Behavior Research Methods 47, 2 (Feb. 2015), 519–528.

7. Martin, D. et al. Turking in a global labour market. Computer Supported Cooperative Work 25, 1 (Jan. 2016), 39–77.

8. Michelucci, P. and Dickinson, J.L. The power of crowds. Science 351, 6268 (2016), 32–33.

9. Salehi, N. et al., Eds. Guidelines for Academic Requesters—WeAreDynamo Wiki (2014); http://bit.ly/1q6pY33.

10. Silberman, M. et al. Stop citing Ross et al. 2010, 'Who are the crowdworkers?'; http://bit.ly/2FkrObs.

11. Vakharia, D. and Lease, M. Beyond Mechanical Turk: An analysis of paid crowd work platforms. In Proceedings of iConference 2015 (2015).


Authors

M. Six Silberman ([email protected]) works in the Crowdsourcing Project at IG Metall, 60329 Frankfurt am Main, Germany.

Bill Tomlinson ([email protected]) is a Professor in the Department of Informatics at the University of California, Irvine, CA, USA, and a Professor in the School of Information Management, Victoria University of Wellington, New Zealand.

Rochelle LaPlante ([email protected]) is a professional crowdworker, Seattle, WA, USA.

Joel Ross ([email protected]) is a Senior Lecturer in the Information School, University of Washington, Seattle, WA, USA.

Lilly Irani ([email protected]) is an Assistant Professor in the Communication Department and Science Studies Program, University of California, San Diego, CA, USA.

Andrew Zaldivar ([email protected]) is a Researcher in the Department of Cognitive Sciences, University of California, Irvine, CA, USA (now at Google).


Footnotes

a. Farrell and Greig3 report that 0.4% of adults "actively participate in" (receive income from) labor platforms each month. ("Labor platforms" here include both platforms for in-person work such as Uber as well as platforms for remote work such as MTurk and Upwork.) Per the CIA World Factbook, the U.S. total population is 321,369,000, with approximately 80.1% "adult" ("15 years or older"). Therefore the number of U.S. adults earning more than 75% of their income from labor platforms is approximately 0.25 * 0.004 * 0.801 * 321369000, or 257,415. "Adults" is interpreted by Farrell and Greig as "18 years or older," not "15 years or older," so we round down to 250,000.

This material is based upon work supported in part by National Science Foundation Grant CCF-1442749. The authors thank Janine Berg and Valerio De Stefano for comments. This Viewpoint reflects the authors' views, not any official organizational position.


Copyright held by authors.