
Communications of the ACM

ACM News

How Facebook Got Addicted to Spreading Misinformation


Joaquin Quiñonero Candela, a director of AI at Facebook.

The algorithms that underpin Facebook's business weren't created to filter out what was false or inflammatory; they were designed to make people share and engage with as much content as possible by showing them things they were most likely to be outraged by.

Credit: Winni Wintermeyer

Joaquin Quiñonero Candela, a director of AI at Facebook, was apologizing to his audience.

It was March 23, 2018, just days after the revelation that Cambridge Analytica, a consultancy that worked on Donald Trump's 2016 presidential election campaign, had surreptitiously siphoned the personal data of tens of millions of Americans from their Facebook accounts in an attempt to influence how they voted. It was the biggest privacy breach in Facebook's history, and Quiñonero had already been scheduled to speak at a company conference on, among other things, "the intersection of AI, ethics, and privacy." He considered canceling, but after debating it with his communications director, he kept his allotted time.

As he stepped up to face the room, he began with an admission. "I've just had the hardest five days in my tenure at Facebook," he remembers saying. "If there's criticism, I'll accept it."

The Cambridge Analytica scandal would kick off Facebook's largest publicity crisis ever. It compounded fears that the algorithms that determine what people see on the platform were amplifying fake news and hate speech, and that Russian hackers had weaponized them to try to sway the election in Trump's favor. Millions began deleting the app; employees left in protest; the company's market capitalization plunged by more than $100 billion after its July earnings call.

From MIT Technology Review