
Communications of the ACM

ACM Careers

AI Content Detectors Discriminate Against Non-Native English Speakers



Over half of non-native English writing samples were misclassified as AI-generated.

Computer programs used to detect essays, job applications, and other work generated by artificial intelligence can discriminate against people who are non-native English speakers, researchers say.

With the rise of generative AI programs, many teachers now view AI detection as a "critical countermeasure" to deter cheating, but the reliability of these tools is uncertain, the researchers write in Patterns.

Scientists led by James Zou, an assistant professor of biomedical data science at Stanford University, ran 91 English essays written by non-native English speakers through seven popular GPT detectors to see how well the programs performed.

More than half of the essays, which were written for a widely recognized English proficiency test, were flagged as AI-generated, with one program flagging 98% of the essays as composed by AI. The scientists traced the discrimination to the way the detectors assess what is human-written and what is AI-generated.
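Detectors of this kind commonly rely on how statistically predictable a passage is to a language model, often measured as perplexity: low-perplexity text reads as "machine-like," and the plainer, more constrained vocabulary typical of non-native writing tends to score low. The sketch below is a minimal, hypothetical illustration of that general approach; the gpt2 checkpoint and the threshold value are assumptions chosen for demonstration, not the method of any particular detector named in the study.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Reference language model used to score how predictable a passage is.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Average per-token perplexity of the text under the reference model.
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

def looks_ai_generated(text: str, threshold: float = 60.0) -> bool:
    # Hypothetical rule: flag text whose perplexity falls below a cutoff.
    # Plainer, more predictable wording (common in non-native writing)
    # lowers perplexity, which is how a rule like this can misfire.
    return perplexity(text) < threshold

sample = "The result of the experiment is good and the method is useful."
print(perplexity(sample), looks_ai_generated(sample))

A cutoff tuned on one population of writers and applied to another is exactly where such a rule breaks down, which is consistent with the misclassification rates the researchers report.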

From The Guardian