
Communications of the ACM

BLOG@CACM

What Should be Done About Facebook?


Carnegie Mellon University Associate Professor Jason Hong (Credit: Carnegie Mellon University)

The recent release of the Facebook papers by a whistleblower has confirmed that leaders at the company have long known about problems facilitated by their social media, including disinformation, misinformation, hate speech, depression, and more. There has been a lot of talk about regulators stepping in, with Facebook perhaps allowing them to inspect the algorithms it uses to prioritize and recommend content on Instagram and News Feed. How can we as computer scientists and technologists help here? What kinds of questions, insights, or advice might our community offer these regulators?

Opening Up the Algorithms Isn't Enough

The first piece of advice I would offer is that, yes, Facebook should open up its algorithms to regulators, but that's nowhere near enough. If regulators want to stem the spread of things like hate speech, disinformation, and depression, they also need to take a close look at the processes Facebook uses to develop products and the metrics they measure.

There will probably be some parts of Facebook's algorithms that are understandable, e.g., code that blocks specific web sites or prioritizes sponsored posts that businesses have paid for. However, it's likely that the core of Facebook's algorithms uses machine learning models that are not inspectable in any meaningful way. Most machine learning models are, at their core, large N-dimensional matrices of learned weights that our poor primitive brains have no chance of comprehending.
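To make this concrete, here is a minimal sketch in Python, using synthetic data and scikit-learn rather than anything from Facebook, of why "reading the model" tells a regulator very little: even a tiny trained model is just an array of numbers.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Synthetic stand-in for an engagement model: 200 made-up features per post,
    # random labels for whether a user engaged. Nothing here is real Facebook data.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 200))
    y = rng.integers(0, 2, size=1000)

    model = LogisticRegression(max_iter=1000).fit(X, y)
    print(model.coef_.shape)    # (1, 200): two hundred learned weights
    print(model.coef_[0][:5])   # a few raw weights; none of them says "disinformation"
    # A production ranking model has millions or billions of such parameters,
    # which is why inspecting the weights alone yields little insight.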

It's also very likely that Facebook takes into account hundreds of factors to build their machine learning models, such as recency of a post, number of likes, number of likes from people similar to you, emotional valence of the words in the post, etc. None of those factors are obviously wrong or directly linked to disinformation or other problems Facebook is facing.

From what was disclosed by the whistleblower and from what other researchers have investigated, one of the key issues is that Facebook's algorithms seem to prioritize posts that are likely to have high engagement. The problem is that posts that get people angry tend to have really high engagement and spread quickly, and that includes posts about politics, especially misinformation and disinformation. But you can't just block posts that have high engagement, because that also includes posts about the birth of new babies, graduations, marriages, as well as factual news that may be highly relevant to a community.
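A toy example may help illustrate the dynamic. The sketch below, in which every post and every predicted number is invented, ranks posts purely by predicted engagement; an outrage-driven rumor comes out on top not because the system prefers outrage, but because the objective cannot tell outrage apart from any other engagement.

    posts = [
        {"id": "baby_photo",    "predicted_comments": 40,  "predicted_shares": 10,  "predicted_angry_reacts": 1},
        {"id": "graduation",    "predicted_comments": 25,  "predicted_shares": 5,   "predicted_angry_reacts": 0},
        {"id": "outrage_rumor", "predicted_comments": 300, "predicted_shares": 180, "predicted_angry_reacts": 90},
    ]

    def engagement_score(post):
        # Engagement-only objective: every predicted interaction counts the same,
        # whether it comes from joy or from anger.
        return (post["predicted_comments"] + post["predicted_shares"]
                + post["predicted_angry_reacts"])

    for post in sorted(posts, key=engagement_score, reverse=True):
        print(post["id"], engagement_score(post))
    # outrage_rumor 570, baby_photo 51, graduation 30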

As such, opening up the algorithms is a good and important first step, but is by no means sufficient.

Facebook has "break the glass" measures to slow the spread of misinformation and extremism. Why aren't these features turned on all the time?

Facebook has previously disclosed that it has several safety measures that were turned on during the 2020 election period in the USA. While details are not entirely clear, it seems that this safe mode prioritizes more reliable news sources, slows the growth of political groups that share a lot of misinformation, and reduces the visibility of posts and comments that are likely to incite violence.
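Based only on these public descriptions, one can imagine what such a safe mode might look like as a re-ranking step. The sketch below is purely hypothetical; the reliability scores, the violence classifier, the threshold, and the multipliers are all my own assumptions, not Facebook's.

    # Hypothetical "break the glass" re-ranking: boost reliable sources and
    # sharply demote content a classifier flags as likely to incite violence.
    SOURCE_RELIABILITY = {"local_newspaper": 1.0, "unverified_page": 0.4}  # assumed values

    def safe_mode_score(base_score, source, predicted_violence_prob):
        score = base_score * SOURCE_RELIABILITY.get(source, 0.7)  # default for unknown sources
        if predicted_violence_prob > 0.8:   # threshold is an assumption
            score *= 0.1                    # drastically reduce visibility
        return score

    print(safe_mode_score(100, "local_newspaper", 0.05))  # 100.0
    print(safe_mode_score(100, "unverified_page", 0.90))  # 4.0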

If Facebook already has these measures, why aren't they turned on all the time? If Facebook already knows that some of its online groups spread extremism and violence, why isn't it doing more to block them? Hate speech, extremism, misinformation, and disinformation aren't things that only appear during election season.

Facebook is a Large-Scale Machine Learning Algorithm Prioritizing Engagement

The Facebook papers suggest that leadership at Facebook prioritizes engagement over all other metrics. From a business model perspective, this makes sense, since engagement leads to more time spent on site, and thus more ads that can be displayed, and thus higher revenue. One can think of the entire company itself as a large-scale machine learning algorithm that is optimizing its products and features primarily for engagement.

Part of the problem here is that things like engagement are easy to measure and clearly linked to revenues. However, engagement overlooks other important things, for example the well-being of individuals: thriving, feeling supported, feeling connected, and being well informed. These are much harder metrics to measure, but one could imagine that if product teams at Facebook prioritized these or other similar metrics, we would have a very different kind of social media experience, one that is much more likely to be positive for individuals and for society.
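The contrast becomes clear when you consider where the data comes from. In the sketch below (all data invented), engagement falls straight out of logs the platform already collects, while a well-being measure has to be asked of users through something like a survey.

    sessions = [
        {"user": "a", "minutes": 42, "posts_clicked": 19},
        {"user": "b", "minutes": 7,  "posts_clicked": 2},
    ]
    # Engagement: computable directly from existing logs.
    print("avg minutes per session:",
          sum(s["minutes"] for s in sessions) / len(sessions))

    # Well-being: there is no log line for "felt supported"; it has to be
    # surveyed, sampled, and interpreted. The 1-5 scale items below are
    # invented, not any instrument Facebook actually uses.
    survey = [
        {"user": "a", "felt_supported": 4, "felt_informed": 5},
        {"user": "b", "felt_supported": 2, "felt_informed": 3},
    ]
    print("avg 'felt supported':",
          sum(r["felt_supported"] for r in survey) / len(survey))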

Facebook Can't Be Fully Trusted to Police Itself: It Needs to Open Up Its Algorithms and Platform to Qualified Researchers

The problem with the metrics I proposed above is that I doubt it's possible to force a company to adopt new kinds of metrics. Instead, what I would recommend is that Facebook be required to open up its algorithms, metrics, and platform to a set of qualified researchers around the world.

Facebook has repeatedly demonstrated that it can't police itself, and it has shut down many external efforts aiming to gather data about its platform. A one-time examination of Facebook isn't likely to change its direction in the long term, either. Furthermore, regulators do not have the expertise or resources to continually monitor Facebook. Instead, let's make it easier for scientists and public advocacy groups to gather more data and increase transparency.

That is, internally, Facebook will probably still try to prioritize engagement, but if product teams and the Board of Directors also see public metrics like "hate speech spread" or "half-life of disinformation" published by researchers, they will be forced to confront these issues more directly.
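As an example of the kind of metric outside researchers could publish, here is a rough sketch of a "half-life of disinformation" computation, assuming researchers could obtain share timestamps for posts later labeled false by fact-checkers; the data format and the exact definition of the metric are my own assumptions for illustration.

    from datetime import datetime, timedelta

    def half_life(share_times):
        # Time from the first share until half of all shares have occurred.
        share_times = sorted(share_times)
        midpoint = share_times[(len(share_times) - 1) // 2]
        return midpoint - share_times[0]

    # Synthetic example: a false post shared mostly within its first minutes.
    start = datetime(2021, 11, 1, 9, 0)
    shares = [start + timedelta(minutes=m) for m in (1, 3, 5, 8, 12, 40, 90, 300, 600, 1440)]
    print(half_life(shares))   # 0:11:00 -- most of the spread happens almost immediately

A short half-life would suggest that any moderation applied hours after posting arrives too late, which is exactly the kind of finding that public reporting could surface.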

Now, these scientists and public advocacy groups would need a fair amount of funding to support these activities. There are also many questions of who would qualify, how to ensure data integrity and prevent accidental leakage of sensitive data, and how to ensure transparency and quality of the analyses. However, this approach strikes me as the best way of changing the incentives internally within Facebook.

Conclusion

Facebook opening up its algorithms to regulators isn't enough. Instead, my advice is to look at its existing safety measures and to require that Facebook open up its platform to qualified researchers. I will now pass the question to you: what questions and advice would you offer regulators?

Jason Hong is a professor in the School of Computer Science and the Human-Computer Interaction Institute at Carnegie Mellon University.


Comments


Yuan Tian

Great post! I agree with the suggestion that opening up the algorithms to the regulators isn't enough. More transparency for regulators, researchers, and users will help improve social platforms. Even with Facebook opening up its algorithms and platform, it would still be challenging to bridge the gap between the high-level desired policies for social good and the low-level algorithm design and software implementations. As a security researcher, I see great opportunities for interdisciplinary collaborations and responsibilities in shaping better future social networks for the next generations. When drafting regulations for social networks, regulators might want to hear the voices of computer science researchers, social science researchers, developers, and users to define enforceable regulations.


