
Communications of the ACM

ACM News

Are AI Ethics Teams Doomed to be a Facade?



Credit: Getty Images

The concept of "ethical AI" hardly existed just a few years ago, but times have changed. After countless discoveries of AI systems causing real-world harm, and a slew of professionals sounding the alarm, tech companies now know that all eyes — from customers to regulators — are on their AI. They also know this is something they need an answer for. That answer, in many cases, has been to establish in-house AI ethics teams.

Now present at companies including Google, Microsoft, IBM, Facebook, Salesforce, Sony, and more, such groups and boards were largely positioned as places to do important research and even act as safeguards against the companies' own AI technologies. But this past winter, Google fired Timnit Gebru and Margaret Mitchell, leading voices in the space and the former co-leads of the company's ethical AI lab, after Gebru refused to retract a research paper on the risks of large language models; it felt as if the rug had been pulled out from under the whole concept. It doesn't help that Facebook has also been criticized for steering its AI ethics team away from research into topics like misinformation, for fear it could hurt user growth and engagement. Now, many in the industry are questioning whether these in-house teams are just a facade.

"I do think that skepticism is very much warranted for any 'ethics' thing that comes out of corporations," Gebru told VentureBeat, adding that it "serves as PR [to] make them look good."

 

From VentureBeat
View Full Article


 

