
Communications of the ACM

BLOG@CACM

The Artificialistic Fallacy


Robin K. Hill, University of Wyoming

Last month, an automated document-scanning process rejected my curriculum vitae when it encountered the text "Blog@CACM"—yes, the name of the very publication that you are enjoying—because that string of characters was "not a valid email address." You and I, reader, being literate humans, would not have choked on that peculiarity, but would have recognized the use of the '@' symbol for a novel purpose. Such a trite example, and yet its very triteness indicates the problem. As far as I can tell, this kind of thing happens all the time, yet automated document-scanning and other AI systems are viewed as the coming thing, and as a good thing.
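The scanner's actual rules are not published, so any reconstruction is guesswork. Still, as a purely hypothetical sketch of the failure mode, such a system might treat every token containing '@' as an email address, check it against a simple pattern, and complain whenever the check fails:

import re

# Hypothetical heuristic (the real scanner's logic is unknown): any
# whitespace-delimited token containing '@' is presumed to be an email address.
EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def scan_document(text):
    """Return a complaint for every '@'-bearing token that fails validation."""
    complaints = []
    for raw in text.split():
        token = raw.strip(".,;:()")
        if "@" in token and not EMAIL_PATTERN.match(token):
            complaints.append(f"'{token}' is not a valid email address")
    return complaints

print(scan_document("Editor, Blog@CACM, 2018-present; contact: someone@example.edu"))
# ["'Blog@CACM' is not a valid email address"] -- the genuine address passes,
# but the publication name is flagged, as in the anecdote.

A literate reader parses "Blog@CACM" as a publication name; a heuristic like this cannot.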

This turn to AI for clerical tasks has raised concerns, of course, accompanying more serious concerns about reliance on AI for professional tasks, such as prediction and recommender systems. Yet the prevailing tone in the popular press remains that all will be well as soon as we figure it out: artificial intelligence is upon us, and we're better off.

A closer look shows that many casually justify Tech's influence by an appeal to current and future Tech influence. Does that seem dubious? Philosophy can explain why. In the study of ethics, the naturalistic fallacy is the tendency to take what is as what ought to be. Overt expressions of this fallacy such as "It's acceptable for men to be aggressive because the male is more aggressive in nature" enjoy popularity along with covert expressions such as "That's the way of the world" and "Your standards are wishful thinking" and "You can't get in the way of progress," which uphold the status quo as right and proper, and insinuate it into the morally right and ethically proper. The reasoning sometimes exhibits two steps: (1) what manifests progress is inevitable, and (2) the inevitable is justified. (This assumption exhibits an interesting tension with another casual assumption, that change, unqualified "change," is good.)

The study of deriving "ought" from "is", the shorthand description in philosophy, is a venerable and manifold subject. Complications abound; the reader can consult the Stanford Encyclopedia of Philosophy [SEP NonNat 2018] for those, as we will ignore them here. I will simply object to the implicit appeal to "how things are" to warrant how things are (as well as the appeal to "how things aren't" to warrant how things aren't). In his Principia Ethica, G.E. Moore wrote a well-known exposition of this fallacy, stating it particularly in terms of the theory of evolution: "This is the view that we ought to move in the direction of evolution simply because it is the direction of evolution" [Moore 1903, emphasis his].

The flavor of naturalistic fallacy discussed here addresses technology and the general endorsement of it as progress, and rests comfortably on ambiguous connotations of "progress." In the context of artificial intelligence, we can call this the Artificialistic Fallacy: AI systems are progress, and so they are good. Because most people recognize this fallacy as soon as it's pointed out, direct statements of it are hard to find; indirect statements, not so hard. From the Wolfram company website: "The rise of computation has been a major world theme for the past 50 years. Our goal is to provide the framework to let computation achieve its full potential in the decades to come: to make it possible to compute whatever can be computed, whenever and wherever it is needed, and to make accessible the full frontiers of the computational universe." [Wolfram] The fifty-year presence of computation constitutes the "is." The drive to make more computation possible constitutes the "ought." Such statements proliferate in public discourse, and in the halls of Tech. I recently heard successive computer science faculty candidates introduce their work as certain technical progress and therefore clear social good.

Let's acknowledge that people are free to derive their morality from nature (or artifice), but they must adopt that as a premise somehow in order to avoid the fallacy. It doesn't come naturally, so to speak; the prescriptive cannot be derived by logic from the descriptive. And let's acknowledge that many discussions of Tech, admiring and critical, do not exhibit any such fallacy. Many well-intentioned AI developers simply believe that AI will prove of great benefit to society, making it worthwhile to pursue and improve. That may not be true, but, as a straightforward assertion, it's not a fallacy. And, while we're at it, let's acknowledge that many AI systems are downright useful and constructive. But let's reject, forthrightly, automated document-scanners, recidivism predictors, and security assessors. Let's simply pass up AI systems that do more harm than good.

The artificialistic fallacy is often committed by omission, in the overlooking or deferral of the entire issue. If I am not mistaken, in the successive AI Now reports—2016, 2017, and 2018—the authors are increasingly alarmed by this, their recommendations moving from opening up research to monitoring AI systems to regulation and governance [AI Now]. Questioning whether the "is" can be called "good" does not mean much unless rejecting the "is" remains an option.

Ethics institutes burgeon with the objective of refining AI rather than the objective of assessing AI. See, for example, the Responsible Computer Science Challenge [ResponsCS 2018], which calls for more ethics in development but stops short of mentioning any restriction on commerce. This point is made by Greene, Hoffmann, and Stark, in a study of values statements published by AI institutions, comprising non-profit, corporate, and academic membership. They note that the emphasis is placed on fixing AI so that its full advantages can be obtained without resistance: "...edicts to do something new are framed as moral imperatives, while the possibility of not doing something is only a suggestion, if mentioned at all" [Greene 2019].

It is the rare statement in any of these contexts that foregrounds the option of turning down an AI system, and I believe that, although evil intentions or deliberate greed may play a part, the artificialistic fallacy looms large. This is a heavy weight to hang on the misinterpretation of an '@', but take it as a warning from Moore to repudiate the reasoning by which the fact "...that the forces of Nature are working on that side is taken as a presumption that it is the right side" [Moore 1903, Sec. 34].

References

[AI Now] AI Now Publications. The AI Now Institute. Accessed 23 March 2019.

[Greene 2019] Greene, D., Hoffmann, A.L., & Stark, L. 2019. Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning. Hawaii International Conference on System Sciences (HICSS). Maui, HI.

[Moore 1903] G. E. Moore. Principia Ethica. 1903. Cambridge University Press. Revised edition published 1993.

[ResponsCS 2018] Mozilla. 2018. Responsible Computer Science Challenge. The Mozilla Blog. October 10, 2018.

[SEP NonNat 2018] Michael Ridge. Moral Non-Naturalism. The Stanford Encyclopedia of Philosophy, Spring 2018 Edition. Edward N. Zalta, ed.

[Wolfram] About Wolfram Research. No author given. Wolfram Company. Accessed 26 March 2019.

 

Robin K. Hill is a lecturer in the Department of Computer Science and an affiliate of both the Department of Philosophy and Religious Studies and the Wyoming Institute for Humanities Research at the University of Wyoming. She has been a member of ACM since 1978.


 
