
Communications of the ACM

ACM Careers

Responsible AI has a Burnout Problem


[Image: artist's representation of burnout]

Tech companies such as Meta have been forced by courts to offer compensation and extra mental-health support for employees such as content moderators, who often have to sift through graphic and violent content that can be traumatizing.

Credit: Stephanie Arnett/MITTR | Unsplash

Margaret Mitchell had been working at Google for two years before she realized she needed a break.

"I started having regular breakdowns," says Mitchell, who founded and co-led the company's Ethical AI team. "That was not something that I had ever experienced before."

Only after she spoke with a therapist did she understand the problem: she was burnt out. She ended up taking medical leave because of stress.

Mitchell, who now works as an AI researcher and chief ethics scientist at the AI startup Hugging Face, is far from alone in her experience. Burnout is becoming increasingly common in responsible-AI teams, says Abhishek Gupta, the founder of the Montreal AI Ethics Institute and a responsible-AI consultant at Boston Consulting Group.

Companies are under increasing pressure from regulators and activists to ensure that their AI products are developed in a way that mitigates any potential harms before they are released. In response, they have invested in teams that evaluate how our lives, societies, and political systems are affected by the way these systems are designed, developed, and deployed.

From MIT Technology Review
View Full Article


 

