
BLOG@CACM

FictionStein


Robin K. Hill, University of Wyoming

The 200th anniversary of the publication of "Frankenstein; or, The Modern Prometheus" [Shelley 1818] is observed this year in many ways, including its assignment to my first-year seminar in computer science, for comparison to more modern instances of the unintended consequences of technology. In Dr. Frankenstein's case, we can't examine the details of the technology; Mary Shelley obviates any need to make up bogus science by having him tell us straightforwardly that his method is simple, but known only to him, and it will stay that way.

Of course, it's simply preposterous that a crazy quilt of cadaver parts could move and talk, learn and reason, yet that general narrative, leading to unfortunate events with poignant overtones, has gripped readers for two centuries. Other stories play the same game: Pygmalion, Terminator, Ex Machina; see the helpful Wikipedia "List of fictional robots and androids" [Wiki Androids]. A key feature of these characters is the human needs that belie their unnatural origins, and thereby create dramatic tension. Perhaps HAL, the spaceship's control computer in the book and film "2001: A Space Odyssey" [Clarke 1968], is the rare fictional humanoid that is not in some way overlaid with human desires for the purpose of the story, remaining literal, although its voice endows even HAL with a compelling human presentation.

No thinking person takes fiction to be truth. But we may, in our entertainment mode, in our sensation mode, in our not-thinking-very-hard modes, take fiction to be a platform of possibility. Science fiction not only illustrates counterfactuals, but makes them familiar; it cultivates counterfactual memes that we may grow to accept, to believe. See a previous blog on the inverse [Hill 2016], in which I note that science fiction uses model theory (implicit and less than complete) to lend plausibility to its worlds. Here, we have life not just imitating, but swallowing, art. What worries me in the humanoid story is not just the trope of revenge of the artifacts, but the very notion that artifacts will want revenge, or will want anything at all, or will have any sort of affect whatever.

For anyone with a tendency to generalize, as people do, the message of these popular works is partly that humans can be created from materials at hand. The message is also that what looks enough like us is like us. As soon as a creature acquires limbs, then hands, then eyes... then all the blanks will be filled in automatically. Our artifacts will gain desire, regret, loyalty, and affection. Now that, in AI, the resemblance to humans has shifted from external appearance to internal acumen, parallel misconceptions arise [Darwiche 2018]. The public seems to be extrapolating to the view that a programming system will fill in all the blanks as soon as it acquires some of the trappings of human reasoning. Because our "smart" products exhibit some systematic ratiocination, they must be acquiring real intelligence. No, they aren't. They really aren't. Robots, as we know them now, and as we can conceive of them on a line of development from the current state, are not people. They really aren't.

The New York Times article on Frankenstein at 200 surveys the many political, social, academic, and rhetorical purposes into which the doctor and his creature have been pressed, and quotes scholar Ed Finn calling a simplistic view of the narrative "dangerous." Says Finn, "A better conversation about Frankenstein would focus on the deep connection between scientific creativity and our responsibility to ourselves and one another" [Schuessler 2018]. Yes, please.

That conversation is sometimes delegated to courses in the ethics of computing, about which my misgivings have already been expressed [Hill 2018]. In the August issue of CACM, Burton et al. describe an ethics course based on science fiction texts [Burton 2018]. This course sounds great; I'd love to take it. Burton and her co-authors and co-teachers are conscientious and comprehensive in their cultivation of moral imagination; their purpose is to raise, describe, and explore quandaries. Indeed, ethics starts with (1) how to ask the right questions and (2) how to apply the theories to those questions, comprising the open-ended treatment sought. But IRL, we must also (3) decide and (4) act. These steps have to take place in the real world. Many computing ethics classes seem to stop short at (1) formulating questions and (2) applying theories to those questions. As Burton says, "The goal of teaching ethics is to foster the debates and equip practitioners to participate productively" [Burton 2018, p. 57]. No philosopher scorns that (and no legitimate ethics educator tells students what to do). But the hypothetical supports those first two steps. And the hypothetical can be relied on only in those first two steps. The hypothetical does not automatically fill in the blanks. That's the hard part, steps (3) and (4).

My students can opine that the doctor should build self-destruct mechanisms into all his projects, or should contrive the monster's end by bombing, or should stay in Geneva and make chocolate. They can demand outrageous actions from Dr. Frankenstein. (IRL, they are more thoughtful.) They can draw any conclusion they please about what he should do, without bond or risk. He's fictional! They will never be in his position, nor will anyone else. While fiction provides vivid and rich scenarios worthy of ethical study, it allows commitment and courage to be deferred. A better conversation about our responsibility to ourselves and one another would address the commitment and courage.

References

[Burton 2018] Burton, E. et al. How to Teach Computer Ethics through Science Fiction. CACM, 61:8, 54-64 (2018).

[Clarke 1968] Clarke, A. 2001: A Space Odyssey, 1968.

[Darwiche 2018] Darwiche, A. Human-Level Intelligence or Animal-Like Abilities? CACM, 61:10, 56-67 (2018).

[Hill 2016] Hill, R. Fiction as Model Theory. Blog@CACM, December 30, 2016.

[Hill 2018] Hill, R. Tech Ethics at Work. Blog@CACM, January 29, 2018.

[Wiki Androids] Wikipedia contributors. List of fictional robots and androids. Wikipedia, The Free Encyclopedia, October 31, 2018. Retrieved November 11, 2018.

[Schuessler 2018] Schuessler, J. And Woman Created Monster. New York Times, October 28, 2018.

[Shelley 1818] Shelley, M. Frankenstein; or, The Modern Prometheus. Lackington, Hughes, Harding, Mavor & Jones, 1818.


Robin K. Hill is a lecturer in the Department of Computer Science and an affiliate of both the Department of Philosophy and Religious Studies and the Wyoming Institute for Humanities Research at the University of Wyoming. She has been a member of ACM since 1978.


Comments


Leandro Carvalho

Thank you for sharing your experience. I teach a course for CS freshmen about writing. I also bring in themes about ethics, and recently I have had them create and analyze graphs of pass and dropout rates at my institution. Could you share the link to the webpage of your course?


Robin Hill

Greetings, Mr. Carvalho. My blog piece of July 27th, "Lessons from a First-Year Seminar," addressed this course in more detail:
https://cacm.acm.org/blogs/blog-cacm/238427-lessons-from-a-first-year-seminar/fulltext
The syllabus is a framework rather than a document.

Because the subject is more pedagogical than philosophical, I have provided a few more details in a different blog:
https://teachingphilofcs.blogspot.com/2020/01/teaching-first-year-seminar-in-computer.html


