
Communications of the ACM

ACM News

The People Paid to Train AI are Outsourcing Their Work…to AI



The study highlights the need for new ways to check whether data has been produced by humans or AI.

Credit: Stephanie Arnett/MIT Technology Review/Getty

A significant proportion of the people paid to train AI models may themselves be outsourcing that work to AI, a new study has found.

It takes an incredible amount of data to train AI systems to perform specific tasks accurately and reliably. Many companies pay gig workers on platforms like Mechanical Turk to complete tasks that are typically hard to automate, such as solving CAPTCHAs, labeling data, and annotating text. This data is then fed into AI models to train them. The workers are poorly paid and are often expected to complete many tasks very quickly.


No wonder some of them may be turning to tools like ChatGPT to maximize their earning potential. But how many? To find out, a team of researchers from the Swiss Federal Institute of Technology (EPFL) hired 44 people on the gig work platform Amazon Mechanical Turk to summarize 16 extracts from medical research papers. Then they analyzed their responses using an AI model they'd trained themselves that looks for telltale signals of ChatGPT output, such as lack of variety in choice of words. They also extracted the workers' keystrokes in a bid to work out whether they'd copied and pasted their answers, an indicator that they'd generated their responses elsewhere.
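The two signals the researchers looked for can be illustrated with simple heuristics. The sketch below is not the EPFL team's actual classifier; it is a minimal, assumed approximation in Python: a type-token ratio as a crude proxy for "lack of variety in choice of words," and a keystroke-vs-length check as a crude proxy for detecting pasted answers. The function names and the threshold values are hypothetical.

```python
def type_token_ratio(text: str) -> float:
    """Ratio of distinct words to total words.
    Lower values indicate less varied vocabulary, one of the
    telltale signals the article mentions for ChatGPT output."""
    words = text.lower().split()
    if not words:
        return 0.0
    return len(set(words)) / len(words)


def looks_pasted(keystroke_count: int, answer: str,
                 threshold: float = 0.3) -> bool:
    """Flag an answer whose length far exceeds the keystrokes
    recorded while writing it -- a crude sign that the text was
    copied and pasted rather than typed. The 0.3 cutoff is an
    arbitrary illustrative value, not from the study."""
    return keystroke_count < threshold * len(answer)
```

A real classifier would combine many such features (vocabulary statistics, sentence-length distributions, timing data) and learn a decision boundary from labeled examples, rather than rely on fixed cutoffs.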

From MIT Technology Review
View Full Article

