The road to the Cambridge Analytica/Facebook scandal is strewn with failures. There's the failure to protect users' privacy, the failure to protect voters, and the failure to uncover the actions and violations of laws that may well have affected the Brexit referendum and the U.S. presidential election. The latter two threaten the heart of democracy. But here I want to focus on the simpler problem: the failure to protect users' privacy. That problem has the virtue of being easy to unpack—even if its resolution is far from simple.
This failure to protect privacy has multiple parts.
First there's Facebook's failure to design its systems to protect user privacy. Indeed, the company's aim was quite the opposite. Facebook believed that "every app could be social." That meant giving apps broad access not only to a user's data, but also to that of the user's "friends." In 2013, Cambridge University researcher Aleksandr Kogan paid 270,000 Facebook users to take a personality quiz (actually, the money came from Cambridge Analytica, but that's another part of the story). Taking the quiz gave Kogan's app the ability to "scrape" information from their profiles. In those days, Facebook's platform permitted apps not only to access the quiz takers' profiles and scrape information from them, but also to do the same to the profiles of the quiz takers' "friends": all 50 million of them.
The friends' data would be collected unless they had explicitly prohibited such collection, and doing so was not easy. Users were not explicitly told that their data would be shared if a Facebook friend used an app that did such scraping. To prevent collection, users had to first discover that their friends' apps could do this, then configure their own profiles to block such data sharing. That was failure number 1.
Then there's Facebook's failure to take serious legal action after the company became aware that the data of those 50 million Facebook users had been provided to Cambridge Analytica. The transfer violated the agreement under which Kogan had acquired the data in the first place. But when Facebook found out, it merely asked Cambridge Analytica to certify that the user files had been destroyed; the Silicon Valley company did not verify that Cambridge Analytica had actually done so. As we now know, Cambridge Analytica had not complied. Facebook's failure to ensure that the files had been destroyed was failure number 2.
Finally, there's Facebook's failure to inform the 50 million users whose data was taken. There was a breach of contract here, between Kogan and Facebook. But there was also a privacy breach: the profiles of 50 million Facebook users were being used by Cambridge Analytica, a British firm specializing in using personal data for highly targeted, highly personalized political ads. And Facebook failed to inform those 50 million users of the breach. That was failure number 3.
Now Facebook is on the way to repairing some of these failures. In 2014, Facebook limited the information apps could collect on Facebook friends—though not fully (in a recent article, New York Magazine reporter Brian Feldman shows how much information the apps can still collect). Mark Zuckerberg has said that Facebook will, belatedly, inform the 50 million Facebook users of the privacy breach that happened in 2014. But there are other failures as well.
The fourth failure is society's: we haven't been taking privacy particularly seriously. This isn't universally true. In the U.S., for example, the Federal Trade Commission (FTC) and the states' Attorneys General have taken companies to court when firms fail to protect users' privacy or fail to follow their own publicly stated privacy policies. But their set of tools for doing so is quite limited. There's a handful of laws that protect privacy in particular sectors. There are fines if companies fail to live up to stated privacy policies. There are data breach laws. And there's the ability to fine if actual harm has been caused.
The Facebook/Cambridge Analytica case is garnering significant attention from both the FTC and the states' Attorneys General. It helps that in 2011 the FTC acquired a powerful tool when Facebook signed a consent decree as a result of an earlier privacy violation. This decree required the company to make clear "the extent to which [Facebook] makes or has made covered information accessible to third parties." Did the fact that those 50 million "friends" had difficulty preventing collection of their data constitute a violation of the consent decree? We will find out. At a potential $40,000 per violation, the consequences could be quite costly for Facebook.
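To get a sense of the scale, here is a back-of-the-envelope sketch, under the (hypothetical, legally unsettled) assumption that each of the 50 million affected users would count as a separate violation:

```python
# Hypothetical worst-case estimate only: whether each affected user counts
# as a separate violation of the 2011 consent decree is an open legal
# question, not a settled fact.
fine_per_violation = 40_000       # potential fine per violation, in dollars
affected_users = 50_000_000       # users whose data reached Cambridge Analytica

max_exposure = fine_per_violation * affected_users
print(f"Theoretical maximum exposure: ${max_exposure:,}")
# Theoretical maximum exposure: $2,000,000,000,000
```

Even if only a tiny fraction of those users were ultimately counted, the exposure would be substantial.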
We're not quite done; there's a critical fifth failure that may well be the most important of all: our willingness to trade data about ourselves for seemingly free services. The "free" part is nonsense, of course. Services that manipulate how you spend your time, how you feel, and whom you vote for are anything but free. Maybe it's time to cut that connection? Some things will be harder, like seeing that photo of your nephews clowning at breakfast or getting updates from the folks at your previous job, but you may find an extra hour in your day. That's an hour to call a friend, take a walk, or read a book. It sounds like a good trade to me. But I wouldn't actually know. I value my privacy too much, and never joined the network in the first place.
Guest blogger Susan Landau is a computer scientist and cybersecurity policy expert at the Fletcher School of Law & Diplomacy and the School of Engineering, Department of Computer Science, Tufts University.