
Communications of the ACM

BLOG@CACM

Ethical Theories Spotted in Silicon Valley


Robin K. Hill, University of Wyoming

Ethics may be the study most popularly associated with philosophy. What does ethics in the philosophy of computer science tell us about current issues in high tech? My post last month on fake news in social media [Hill 2017] was applied ethics, an attempt at reasoning toward what should be done, rather than an examination of general principles. Traditional philosophical ethics does not tell us what decisions to make; rather, it offers ways to figure out what decisions to make. Let's mark out some of the best-known approaches, in a sketchy way. They are not expressed in parallel terms, because they do not represent a partition of choices; each takes a slightly different perspective on the Right Thing, and they overlap.

Utilitarianism: The state of affairs to strive for is that which contributes the most to overall welfare.

This appeals to our sense of fairness. But it leads to problems, including aggregation, in which individual interests can be trampled. It seems unsatisfactory, for instance, when an action might benefit rich people a great deal at the expense of a couple of struggling poor people.

Deontology: We find the best thing to do in predetermined standards of right and wrong applied to people's actions, that is, duties.

This appeals because it focuses on responsibility, as we think ethics should. But adherence to duty can lead to actions and outcomes that we abhor, because it ignores the particular circumstances; for example, a duty to tell the truth does not allow for deception that would save feelings or even lives.

Virtue ethics: Each of us should strive to be a good person, according to some ideal; doing right will follow from striving toward that standard.

The trouble here is that the path from securing virtues to the smooth execution of the Right Thing seems tenuous; the theory gives no guidance on particular actions, and outcomes do not get much attention.

Consequentialism: Whether an act is right or wrong is given by the value of the resulting state of affairs.

But this often means sacrificing someone, in medical research, for example, in order to cure other patients' conditions. In other words, the ends justify the means. This instrument strikes us as too blunt.

Contractualism: Moral prescriptions or proscriptions are rationally constructed by society and imposed via a social compact with its members, even if implicit or involuntary.

This obviates many objections to the other theories, but seems to ignore beneficence and other "natural" qualities that many would consider to bear active moral worth.

There is more, much more. Quick online references include the Internet Encyclopedia of Philosophy entry on "Ethics," [IEP] and various entries in the Stanford Encyclopedia of Philosophy [SEP]. Naturally, the interested reader can learn more about ethical theories from any philosophy textbook or in a philosophy class at a local college.

Now let's spot the glints of these theories in the sunny landscape of Silicon Valley. Take consequentialism, for instance: The very problem of the spread of fake news and its influence presents bad consequences such as ill-informed decisions and misguided actions, the downside of a practice that we would, on the theory of consequentialism as defined above, adjudge to be wrong. That practice is social media's support for the wanton creation and sharing of fake news. A full analysis would require us to measure all the consequences, of course, bad and good (including protection of free speech), and assess the action in terms of the total picture.

The perspective of contractualism might help determine our stance toward new arrangements that burst forth from the tech world without public review: software licenses, Google Earth surveillance, cars without drivers. Certainly any ethical theory could address issues raised by these developments; contractualism gives the view of the people more immediate prominence than do the others. We would first have to determine whether these arrangements violate principles that are reasonable to, and justifiable by, those in our society. That may not seem like progress, but at least it poses a question for investigation.

The most intriguing case here—the most entangled in the culture of Silicon Valley—is the manifestation of a different theory, virtue ethics, which we can trace through the development of social networking, as well as other high-tech enterprises. In the beginning, entrepreneurs assumed that the virtues of sharing, open communication, assistance to humans, and connection would ensure the triumph of the Good—that those values, embedded in people and projects, would bring about the Right Thing. Facebook, through Mark Zuckerberg, has explicitly adopted this chain of reasoning [Levy 2013]. Others have made similar statements: "Being digital is an egalitarian phenomenon. It makes people more accessible and allows the small, lonely voice to be heard in this otherwise large, empty space" [Bass 1995]. Yet the result has not been the triumph of the Good. The results have been mixed, with the bad effects verging on the horrifying. Observers have described the general tension between the Internet's promise and its manifestations [Jeffries 2014, Vardi 2017], and said of fake news, "This reality is at odds with Facebook's vision of a network where people connect and share important information about themselves and the world around them" [Silverman 2016]. Facebook was mistaken in the expectation that solutions to the world's problems would be automatic.

In the face of this apparent failure of virtue ethics, we might choose to (a) reject the theory, or (b) reject the purported virtues, or (c) reject the premise that the agent's strategies actually strive for those virtues. Case (a) not only poses a challenge to the very idea of virtue ethics, but does so here in a scenario where the scale is huge, which would lend strength to a conclusion about the efficacy of virtue ethics. Case (b) proposes that connection, sharing, and openness are not completely admirable characteristics, which would call for reconsideration of many of our ideals; but perhaps not such a deep reconsideration if we confine those virtues to their expression via the Internet. Case (c) requires us to acknowledge, first, that the agents involved are institutions, groups of people organized for a purpose, and that the purpose includes profit, a purpose often found to be in conflict with the Right Thing. The virtues that we associate with the Good routinely exhibit a drift, in the business world, toward the Good-of-the-Company or the Good-of-the-Executive (and not just in high tech). Yet, insofar as virtue ethics does not focus on action, or attempt to formulate universal rules independent of the agent, the situation presents aspects that call for careful unpacking.

I am tempted to say facetiously that the solution is left to the reader. Tackling any of these questions would be a respectable philosophical task; often, a thorough articulation of the issue itself serves a purpose. Answers, or guidance, at least, may come from sources yet unexplored.

References

[Bass 1995] Bass, Thomas. November 1, 1995. Being Nicholas. WIRED.

[Hill 2017] Hill, Robin. February 26, 2017. Fact and Frivolity in Facebook. Blog@CACM.

[IEP] Fieser, James. Ethics. Internet Encyclopedia of Philosophy. Accessed March 8, 2017. ISSN 2161-0002.

[Jeffries 2014] Jeffries, Stuart. August 24, 2014. How the web lost its way—and its founding principles. The Guardian.

[Levy 2013] Levy, Steven. August 26, 2013. Zuckerberg Explains Facebook's Plan to Get Entire Planet Online. WIRED.

[Silverman 2016] Silverman, Craig. October 26, 2016. Here's Why Facebook's Trending Algorithm Keeps Promoting Fake News. BuzzFeed.

[SEP] The Stanford Encyclopedia of Philosophy. Edward N. Zalta, ed.

[SEP Virtue Ethics] Hursthouse, Rosalind and Pettigrove, Glen. Virtue Ethics. The Stanford Encyclopedia of Philosophy (Winter 2016 Edition).

[Vardi 2017] Vardi, Moshe. 2017. Technology for the Most Effective Use of Mankind. Communications of the ACM 60, 1.

 

Robin K. Hill is adjunct professor in the Department of Philosophy, and in the Wyoming Institute for Humanities Research, of the University of Wyoming. She has been a member of ACM since 1978.


Comments


Cassidy Alan

Reaction to the ACM blog by Robin K. Hill on Ethical Theories Spotted in Silicon Valley:
https://cacm.acm.org/blogs/blog-cacm/214615-ethical-theories-spotted-in-silicon-valley/fulltext#comments

When there is no consensus among interested parties on a standard for ethics in the philosophy of computer science, then of course there will be confusion about how to apply it. Since there is so much diversity in general approaches to what society should do, ethics in computing will stay unclear, just as ethics in anything will.

There once was a modicum of cultural agreement at least.

Yours was a good categorization of various ethics theories.

I have a proposal as a beginning base for ethics that borrows from the following:

(1) the basic (small-l) libertarian principle known as the non-aggression principle;
(2) Similar to point #1, Do no harm.
(3) the Golden Rule of Christianity: Do unto others as you would have them do unto you. Most religions have something like this, and almost all of them incorporate it as part of their basic rules of conduct (except, of course, those involving human sacrifice, and others with their own exceptions).

The best definition of #1 is found here:

Principle of non-aggression:
https://wiki.mises.org/wiki/Principle_of_non-aggression

The non-aggression principle (also called the non-aggression axiom, or the anti-coercion or zero aggression principle or non-initiation of force) is an ethical stance which asserts that "aggression" is inherently illegitimate. "Aggression" is defined as the "initiation" of physical force against persons or property, the threat of such, or fraud upon persons or their property. In contrast to pacifism, the non-aggression principle does not preclude violent self-defense. The principle is a deontological (or rule-based) ethical stance.

The only way to live peaceably in a society is among people who follow a similar set of basic rules of conduct that fall under the scope of culture. (This is why suddenly throwing together people of disparate and sometimes incompatible cultures results in so much turmoil.)

The first two principles above can be considered the minimum common denominator in most philosophies of society and government. Government is both a poor example and a poor enforcer of ethics rules, however, because it sets up an immediate conflict of interest. It is, as lawyers call it, "against interest" for a government official to diminish his own authority or advantage in rule-setting; therefore, his rulings will always protect his own interests first.

Those three points should give us a good starting point.


Robin Hill

Yes, as I understand it too, the Golden Rule is quite common in the ethics of diverse religions and cultures. A hopeful sign, perhaps! The sticky part is its application in domains like business, which holds specific pragmatic goals. For example, when the professional commitment is to increase shareholder profits, what does the Golden Rule say about marketing a flashy product to people who should be spending that money on family needs?

We can all agree that aggression (nice definition!) is bad. We need to figure out what counts as aggression in a high-tech business, or in any business, in order to clarify our shared rules of conduct.


