Everyone in computing is promoting ethics these days. The Vatican has issued the Rome Call for AI Ethics, which has been endorsed by many organizations, including tech companies. Facebook (now Meta) has donated millions of U.S. dollars to establish a new Institute for Ethics in Artificial Intelligence at the Technical University of Munich, since "ensuring the responsible and thoughtful use of AI is foundational to everything we do."a Google announced it "is committed to making progress in the responsible development of AI."b And last but not least, ACM now requires that nominators and endorsers of ACM award candidates attest that "To the best of my knowledge, the candidate … has not committed any action that violates the ACM Code of Ethics and ACM's Core Values."
But AI technology is the fundamental technology that underlies "Surveillance Capitalism," defined as an economic system centered on the commodification of personal data with the core purpose of profit-making. Under the mantra of "Information wants to be free," several tech companies have turned themselves into advertising companies. They have also perfected the technology of micro-targeted advertising, which matches ads with individual preferences. In Silicon Valley lingo, this business model is described as: "If you're not paying for it, you're the product."

Shoshana Zuboff arguedc eloquently about the societal risk posed by surveillance capitalism. "We can have democracy," she wrote, "or we can have a surveillance society, but we cannot have both." Internet companies have mastered the art of harvesting the grains of information we share with them, using them to construct heaps of information about us. And just as the grains of information are turned into a heap of information about us, the grains of influence that Internet companies give us result in a heap of influence we are not aware of, as we learned from the Cambridge Analytica scandal. All of this is enabled by machine learning that maps user profiles to advertisements. AI is also used to curate content for social-media users with a primary goal of maximizing user engagement, and, as a consequence, advertising revenues.
Surveillance capitalism is perfectly legal and enormously profitable, but many people, including me, believe it is unethical.d After all, the ACM Code of Ethics and Professional Conducte opens with "Computing professionals' actions change the world. To act responsibly, they should reflect upon the wider impacts of their work, consistently supporting the public good." It would be extremely difficult to argue that surveillance capitalism supports the public good.
The clash between an unethical business model and a façade of ethical behavior creates unsustainable tension inside some of these companies. In December 2020, Timnit Gebru, a computer scientist who works on algorithmic bias, was at the center of a public controversy stemming from her abrupt and contentious departure from Google as technical co-lead of the Ethical Artificial Intelligence Team, after higher management requested that she withdraw an as-yet-unpublished paper, which detailed multiple risks and biases of large language models, or remove the names of all Google co-authors. This management request was described by many Googlers as "an unprecedented research censorship."f In the aftermath of Gebru's dismissal, Google fired Margaret Mitchell, another top researcher on its AI ethics team. In response to these firings, the ACM Conference for Fairness, Accountability, and Transparency (FAccT) decided to suspend its sponsorship relationship with Google, stating briefly that "having Google as a sponsor for the 2021 conference would not be in the best interests of the community."
The biggest problem that computing faces today is not that AI technology is unethical—though machine bias is a serious issue—but that AI technology is used by large and powerful corporations to support a business model that is, arguably, unethical. Yet, with the exception of FAccT, I have seen practically no serious discussion in the ACM community of its relationship with surveillance-capitalism corporations. For example, the ACM Turing Award, ACM's highest award, is now accompanied by a prize of US$1 million, supported by Google.
Furthermore, the issue is not just ACM's relationship with tech companies. We must also consider how we view officers and technical leaders in these companies. Seriously holding members of our community accountable for the decisions of the institutions they lead raises important questions. How do we apply the standard of "have not committed any action that violates the ACM Code of Ethics and ACM's Core Values" to such people? It is time for us to have difficult and nuanced conversations on responsible computing, ethics, corporate behavior, and professional responsibility.
a. https://about.fb.com/news/2019/01/tum-institute-for-ethics-in-ai/
b. https://ai.google/responsibilities/responsible-ai-practices/